commit 5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
author/committer date: 2023-02-21 18:24:12 -0800
tree cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next (grafted)
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs.
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port
range on a socket-by-socket basis.
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
BPF:
- Add an rbtree data structure following the "next-gen data structure"
precedent set by the recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in collect
metadata.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscall as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs.
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races of the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of Physical Layer Collision Avoidance (PLCA)
Reconciliation Sublayer (RS) (802.3cg-2019). Modern version of
shared medium Ethernet.
- Support for MAC Merge layer (IEEE 802.3-2018 clause 99). Allowing
preemption of low priority frames by high priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using nl80211 interface instead.
- Improve the CAN bit timing configuration. Use extack to return
error messages directly to user space, update the SJW handling,
including the definition of a new default value that will benefit
CAN-FD controllers, by increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper phys
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c')
-rw-r--r-- | drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c | 711 |
1 file changed, 711 insertions, 0 deletions
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
new file mode 100644
index 000000000..a5e3d1a88
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
@@ -0,0 +1,711 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2013-2018, 2021, The Linux Foundation. All rights reserved.
+ *
+ * RMNET Data MAP protocol
+ */
+
+#include <linux/netdevice.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <net/ip6_checksum.h>
+#include <linux/bitfield.h>
+#include "rmnet_config.h"
+#include "rmnet_map.h"
+#include "rmnet_private.h"
+#include "rmnet_vnd.h"
+
+#define RMNET_MAP_DEAGGR_SPACING 64
+#define RMNET_MAP_DEAGGR_HEADROOM (RMNET_MAP_DEAGGR_SPACING / 2)
+
+static __sum16 *rmnet_map_get_csum_field(unsigned char protocol,
+					 const void *txporthdr)
+{
+	if (protocol == IPPROTO_TCP)
+		return &((struct tcphdr *)txporthdr)->check;
+
+	if (protocol == IPPROTO_UDP)
+		return &((struct udphdr *)txporthdr)->check;
+
+	return NULL;
+}
+
+static int
+rmnet_map_ipv4_dl_csum_trailer(struct sk_buff *skb,
+			       struct rmnet_map_dl_csum_trailer *csum_trailer,
+			       struct rmnet_priv *priv)
+{
+	struct iphdr *ip4h = (struct iphdr *)skb->data;
+	void *txporthdr = skb->data + ip4h->ihl * 4;
+	__sum16 *csum_field, pseudo_csum;
+	__sum16 ip_payload_csum;
+
+	/* Computing the checksum over just the IPv4 header--including its
+	 * checksum field--should yield 0.  If it doesn't, the IP header
+	 * is bad, so return an error and let the IP layer drop it.
+	 */
+	if (ip_fast_csum(ip4h, ip4h->ihl)) {
+		priv->stats.csum_ip4_header_bad++;
+		return -EINVAL;
+	}
+
+	/* We don't support checksum offload on IPv4 fragments */
+	if (ip_is_fragment(ip4h)) {
+		priv->stats.csum_fragmented_pkt++;
+		return -EOPNOTSUPP;
+	}
+
+	/* Checksum offload is only supported for UDP and TCP protocols */
+	csum_field = rmnet_map_get_csum_field(ip4h->protocol, txporthdr);
+	if (!csum_field) {
+		priv->stats.csum_err_invalid_transport++;
+		return -EPROTONOSUPPORT;
+	}
+
+	/* RFC 768: UDP checksum is optional for IPv4, and is 0 if unused */
+	if (!*csum_field && ip4h->protocol == IPPROTO_UDP) {
+		priv->stats.csum_skipped++;
+		return 0;
+	}
+
+	/* The checksum value in the trailer is computed over the entire
+	 * IP packet, including the IP header and payload.  To derive the
+	 * transport checksum from this, we first subtract the contribution
+	 * of the IP header from the trailer checksum.  We then add the
+	 * checksum computed over the pseudo header.
+	 *
+	 * We verified above that the IP header contributes zero to the
+	 * trailer checksum.  Therefore the checksum in the trailer is
+	 * just the checksum computed over the IP payload.
+	 *
+	 * If the IP payload arrives intact, adding the pseudo header
+	 * checksum to the IP payload checksum will yield 0xffff (negative
+	 * zero).  This means the trailer checksum and the pseudo checksum
+	 * are additive inverses of each other.  Put another way, the
+	 * message passes the checksum test if the trailer checksum value
+	 * is the negated pseudo header checksum.
+	 *
+	 * Knowing this, we don't even need to examine the transport
+	 * header checksum value; it is already accounted for in the
+	 * checksum value found in the trailer.
+	 */
+	ip_payload_csum = csum_trailer->csum_value;
+
+	pseudo_csum = csum_tcpudp_magic(ip4h->saddr, ip4h->daddr,
+					ntohs(ip4h->tot_len) - ip4h->ihl * 4,
+					ip4h->protocol, 0);
+
+	/* The cast is required to ensure only the low 16 bits are examined */
+	if (ip_payload_csum != (__sum16)~pseudo_csum) {
+		priv->stats.csum_validation_failed++;
+		return -EINVAL;
+	}
+
+	priv->stats.csum_ok++;
+	return 0;
+}
+
+#if IS_ENABLED(CONFIG_IPV6)
+static int
+rmnet_map_ipv6_dl_csum_trailer(struct sk_buff *skb,
+			       struct rmnet_map_dl_csum_trailer *csum_trailer,
+			       struct rmnet_priv *priv)
+{
+	struct ipv6hdr *ip6h = (struct ipv6hdr *)skb->data;
+	void *txporthdr = skb->data + sizeof(*ip6h);
+	__sum16 *csum_field, pseudo_csum;
+	__sum16 ip6_payload_csum;
+	__be16 ip_header_csum;
+
+	/* Checksum offload is only supported for UDP and TCP protocols;
+	 * the packet cannot include any IPv6 extension headers
+	 */
+	csum_field = rmnet_map_get_csum_field(ip6h->nexthdr, txporthdr);
+	if (!csum_field) {
+		priv->stats.csum_err_invalid_transport++;
+		return -EPROTONOSUPPORT;
+	}
+
+	/* The checksum value in the trailer is computed over the entire
+	 * IP packet, including the IP header and payload.  To derive the
+	 * transport checksum from this, we first subtract the contribution
+	 * of the IP header from the trailer checksum.  We then add the
+	 * checksum computed over the pseudo header.
+	 */
+	ip_header_csum = (__force __be16)ip_fast_csum(ip6h, sizeof(*ip6h) / 4);
+	ip6_payload_csum = csum16_sub(csum_trailer->csum_value, ip_header_csum);
+
+	pseudo_csum = csum_ipv6_magic(&ip6h->saddr, &ip6h->daddr,
+				      ntohs(ip6h->payload_len),
+				      ip6h->nexthdr, 0);
+
+	/* It's sufficient to compare the IP payload checksum with the
+	 * negated pseudo checksum to determine whether the packet
+	 * checksum was good.  (See further explanation in comments
+	 * in rmnet_map_ipv4_dl_csum_trailer()).
+	 *
+	 * The cast is required to ensure only the low 16 bits are
+	 * examined.
+	 */
+	if (ip6_payload_csum != (__sum16)~pseudo_csum) {
+		priv->stats.csum_validation_failed++;
+		return -EINVAL;
+	}
+
+	priv->stats.csum_ok++;
+	return 0;
+}
+#else
+static int
+rmnet_map_ipv6_dl_csum_trailer(struct sk_buff *skb,
+			       struct rmnet_map_dl_csum_trailer *csum_trailer,
+			       struct rmnet_priv *priv)
+{
+	return 0;
+}
+#endif
+
+static void rmnet_map_complement_ipv4_txporthdr_csum_field(struct iphdr *ip4h)
+{
+	void *txphdr;
+	u16 *csum;
+
+	txphdr = (void *)ip4h + ip4h->ihl * 4;
+
+	if (ip4h->protocol == IPPROTO_TCP || ip4h->protocol == IPPROTO_UDP) {
+		csum = (u16 *)rmnet_map_get_csum_field(ip4h->protocol, txphdr);
+		*csum = ~(*csum);
+	}
+}
+
+static void
+rmnet_map_ipv4_ul_csum_header(struct iphdr *iphdr,
+			      struct rmnet_map_ul_csum_header *ul_header,
+			      struct sk_buff *skb)
+{
+	u16 val;
+
+	val = MAP_CSUM_UL_ENABLED_FLAG;
+	if (iphdr->protocol == IPPROTO_UDP)
+		val |= MAP_CSUM_UL_UDP_FLAG;
+	val |= skb->csum_offset & MAP_CSUM_UL_OFFSET_MASK;
+
+	ul_header->csum_start_offset = htons(skb_network_header_len(skb));
+	ul_header->csum_info = htons(val);
+
+	skb->ip_summed = CHECKSUM_NONE;
+
+	rmnet_map_complement_ipv4_txporthdr_csum_field(iphdr);
+}
+
+#if IS_ENABLED(CONFIG_IPV6)
+static void
+rmnet_map_complement_ipv6_txporthdr_csum_field(struct ipv6hdr *ip6h)
+{
+	void *txphdr;
+	u16 *csum;
+
+	txphdr = ip6h + 1;
+
+	if (ip6h->nexthdr == IPPROTO_TCP || ip6h->nexthdr == IPPROTO_UDP) {
+		csum = (u16 *)rmnet_map_get_csum_field(ip6h->nexthdr, txphdr);
+		*csum = ~(*csum);
+	}
+}
+
+static void
+rmnet_map_ipv6_ul_csum_header(struct ipv6hdr *ipv6hdr,
+			      struct rmnet_map_ul_csum_header *ul_header,
+			      struct sk_buff *skb)
+{
+	u16 val;
+
+	val = MAP_CSUM_UL_ENABLED_FLAG;
+	if (ipv6hdr->nexthdr == IPPROTO_UDP)
+		val |= MAP_CSUM_UL_UDP_FLAG;
+	val |= skb->csum_offset & MAP_CSUM_UL_OFFSET_MASK;
+
+	ul_header->csum_start_offset = htons(skb_network_header_len(skb));
+	ul_header->csum_info = htons(val);
+
+	skb->ip_summed = CHECKSUM_NONE;
+
+	rmnet_map_complement_ipv6_txporthdr_csum_field(ipv6hdr);
+}
+#else
+static void
+rmnet_map_ipv6_ul_csum_header(void *ip6hdr,
+			      struct rmnet_map_ul_csum_header *ul_header,
+			      struct sk_buff *skb)
+{
+}
+#endif
+
+static void rmnet_map_v5_checksum_uplink_packet(struct sk_buff *skb,
+						struct rmnet_port *port,
+						struct net_device *orig_dev)
+{
+	struct rmnet_priv *priv = netdev_priv(orig_dev);
+	struct rmnet_map_v5_csum_header *ul_header;
+
+	ul_header = skb_push(skb, sizeof(*ul_header));
+	memset(ul_header, 0, sizeof(*ul_header));
+	ul_header->header_info = u8_encode_bits(RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD,
+						MAPV5_HDRINFO_HDR_TYPE_FMASK);
+
+	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+		void *iph = ip_hdr(skb);
+		__sum16 *check;
+		void *trans;
+		u8 proto;
+
+		if (skb->protocol == htons(ETH_P_IP)) {
+			u16 ip_len = ((struct iphdr *)iph)->ihl * 4;
+
+			proto = ((struct iphdr *)iph)->protocol;
+			trans = iph + ip_len;
+		} else if (IS_ENABLED(CONFIG_IPV6) &&
+			   skb->protocol == htons(ETH_P_IPV6)) {
+			u16 ip_len = sizeof(struct ipv6hdr);
+
+			proto = ((struct ipv6hdr *)iph)->nexthdr;
+			trans = iph + ip_len;
+		} else {
+			priv->stats.csum_err_invalid_ip_version++;
+			goto sw_csum;
+		}
+
+		check = rmnet_map_get_csum_field(proto, trans);
+		if (check) {
+			skb->ip_summed = CHECKSUM_NONE;
+			/* Ask for checksum offloading */
+			ul_header->csum_info |= MAPV5_CSUMINFO_VALID_FLAG;
+			priv->stats.csum_hw++;
+			return;
+		}
+	}
+
+sw_csum:
+	priv->stats.csum_sw++;
+}
+
+/* Adds MAP header to front of skb->data
+ * Padding is calculated and set appropriately in MAP header. Mux ID is
+ * initialized to 0.
+ */
+struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
+						  int hdrlen,
+						  struct rmnet_port *port,
+						  int pad)
+{
+	struct rmnet_map_header *map_header;
+	u32 padding, map_datalen;
+
+	map_datalen = skb->len - hdrlen;
+	map_header = (struct rmnet_map_header *)
+			skb_push(skb, sizeof(struct rmnet_map_header));
+	memset(map_header, 0, sizeof(struct rmnet_map_header));
+
+	/* Set next_hdr bit for csum offload packets */
+	if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV5)
+		map_header->flags |= MAP_NEXT_HEADER_FLAG;
+
+	if (pad == RMNET_MAP_NO_PAD_BYTES) {
+		map_header->pkt_len = htons(map_datalen);
+		return map_header;
+	}
+
+	BUILD_BUG_ON(MAP_PAD_LEN_MASK < 3);
+	padding = ALIGN(map_datalen, 4) - map_datalen;
+
+	if (padding == 0)
+		goto done;
+
+	if (skb_tailroom(skb) < padding)
+		return NULL;
+
+	skb_put_zero(skb, padding);
+
+done:
+	map_header->pkt_len = htons(map_datalen + padding);
+	/* This is a data packet, so the CMD bit is 0 */
+	map_header->flags = padding & MAP_PAD_LEN_MASK;
+
+	return map_header;
+}
+
+/* Deaggregates a single packet
+ * A whole new buffer is allocated for each portion of an aggregated frame.
+ * Caller should keep calling deaggregate() on the source skb until 0 is
+ * returned, indicating that there are no more packets to deaggregate. Caller
+ * is responsible for freeing the original skb.
+ */
+struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
+				      struct rmnet_port *port)
+{
+	struct rmnet_map_v5_csum_header *next_hdr = NULL;
+	struct rmnet_map_header *maph;
+	void *data = skb->data;
+	struct sk_buff *skbn;
+	u8 nexthdr_type;
+	u32 packet_len;
+
+	if (skb->len == 0)
+		return NULL;
+
+	maph = (struct rmnet_map_header *)skb->data;
+	packet_len = ntohs(maph->pkt_len) + sizeof(*maph);
+
+	if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV4) {
+		packet_len += sizeof(struct rmnet_map_dl_csum_trailer);
+	} else if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV5) {
+		if (!(maph->flags & MAP_CMD_FLAG)) {
+			packet_len += sizeof(*next_hdr);
+			if (maph->flags & MAP_NEXT_HEADER_FLAG)
+				next_hdr = data + sizeof(*maph);
+			else
+				/* Mapv5 data pkt without csum hdr is invalid */
+				return NULL;
+		}
+	}
+
+	if (((int)skb->len - (int)packet_len) < 0)
+		return NULL;
+
+	/* Some hardware can send us empty frames. Catch them */
+	if (!maph->pkt_len)
+		return NULL;
+
+	if (next_hdr) {
+		nexthdr_type = u8_get_bits(next_hdr->header_info,
+					   MAPV5_HDRINFO_HDR_TYPE_FMASK);
+		if (nexthdr_type != RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD)
+			return NULL;
+	}
+
+	skbn = alloc_skb(packet_len + RMNET_MAP_DEAGGR_SPACING, GFP_ATOMIC);
+	if (!skbn)
+		return NULL;
+
+	skb_reserve(skbn, RMNET_MAP_DEAGGR_HEADROOM);
+	skb_put(skbn, packet_len);
+	memcpy(skbn->data, skb->data, packet_len);
+	skb_pull(skb, packet_len);
+
+	return skbn;
+}
+
+/* Validates packet checksums. Function takes a pointer to
+ * the beginning of a buffer which contains the IP payload +
+ * padding + checksum trailer.
+ * Only IPv4 and IPv6 are supported along with TCP & UDP.
+ * Fragmented or tunneled packets are not supported.
+ */
+int rmnet_map_checksum_downlink_packet(struct sk_buff *skb, u16 len)
+{
+	struct rmnet_priv *priv = netdev_priv(skb->dev);
+	struct rmnet_map_dl_csum_trailer *csum_trailer;
+
+	if (unlikely(!(skb->dev->features & NETIF_F_RXCSUM))) {
+		priv->stats.csum_sw++;
+		return -EOPNOTSUPP;
+	}
+
+	csum_trailer = (struct rmnet_map_dl_csum_trailer *)(skb->data + len);
+
+	if (!(csum_trailer->flags & MAP_CSUM_DL_VALID_FLAG)) {
+		priv->stats.csum_valid_unset++;
+		return -EINVAL;
+	}
+
+	if (skb->protocol == htons(ETH_P_IP))
+		return rmnet_map_ipv4_dl_csum_trailer(skb, csum_trailer, priv);
+
+	if (IS_ENABLED(CONFIG_IPV6) && skb->protocol == htons(ETH_P_IPV6))
+		return rmnet_map_ipv6_dl_csum_trailer(skb, csum_trailer, priv);
+
+	priv->stats.csum_err_invalid_ip_version++;
+
+	return -EPROTONOSUPPORT;
+}
+
+static void rmnet_map_v4_checksum_uplink_packet(struct sk_buff *skb,
+						struct net_device *orig_dev)
+{
+	struct rmnet_priv *priv = netdev_priv(orig_dev);
+	struct rmnet_map_ul_csum_header *ul_header;
+	void *iphdr;
+
+	ul_header = (struct rmnet_map_ul_csum_header *)
+		    skb_push(skb, sizeof(struct rmnet_map_ul_csum_header));
+
+	if (unlikely(!(orig_dev->features &
+		       (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM))))
+		goto sw_csum;
+
+	if (skb->ip_summed != CHECKSUM_PARTIAL)
+		goto sw_csum;
+
+	iphdr = (char *)ul_header +
+		sizeof(struct rmnet_map_ul_csum_header);
+
+	if (skb->protocol == htons(ETH_P_IP)) {
+		rmnet_map_ipv4_ul_csum_header(iphdr, ul_header, skb);
+		priv->stats.csum_hw++;
+		return;
+	}
+
+	if (IS_ENABLED(CONFIG_IPV6) && skb->protocol == htons(ETH_P_IPV6)) {
+		rmnet_map_ipv6_ul_csum_header(iphdr, ul_header, skb);
+		priv->stats.csum_hw++;
+		return;
+	}
+
+	priv->stats.csum_err_invalid_ip_version++;
+
+sw_csum:
+	memset(ul_header, 0, sizeof(*ul_header));
+
+	priv->stats.csum_sw++;
+}
+
+/* Generates UL checksum meta info header for IPv4 and IPv6 over TCP and UDP
+ * packets that are supported for UL checksum offload.
+ */
+void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
+				      struct rmnet_port *port,
+				      struct net_device *orig_dev,
+				      int csum_type)
+{
+	switch (csum_type) {
+	case RMNET_FLAGS_EGRESS_MAP_CKSUMV4:
+		rmnet_map_v4_checksum_uplink_packet(skb, orig_dev);
+		break;
+	case RMNET_FLAGS_EGRESS_MAP_CKSUMV5:
+		rmnet_map_v5_checksum_uplink_packet(skb, port, orig_dev);
+		break;
+	default:
+		break;
+	}
+}
+
+/* Process a MAPv5 packet header */
+int rmnet_map_process_next_hdr_packet(struct sk_buff *skb,
+				      u16 len)
+{
+	struct rmnet_priv *priv = netdev_priv(skb->dev);
+	struct rmnet_map_v5_csum_header *next_hdr;
+	u8 nexthdr_type;
+
+	next_hdr = (struct rmnet_map_v5_csum_header *)(skb->data +
+			sizeof(struct rmnet_map_header));
+
+	nexthdr_type = u8_get_bits(next_hdr->header_info,
+				   MAPV5_HDRINFO_HDR_TYPE_FMASK);
+
+	if (nexthdr_type != RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD)
+		return -EINVAL;
+
+	if (unlikely(!(skb->dev->features & NETIF_F_RXCSUM))) {
+		priv->stats.csum_sw++;
+	} else if (next_hdr->csum_info & MAPV5_CSUMINFO_VALID_FLAG) {
+		priv->stats.csum_ok++;
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+	} else {
+		priv->stats.csum_valid_unset++;
+	}
+
+	/* Pull csum v5 header */
+	skb_pull(skb, sizeof(*next_hdr));
+
+	return 0;
+}
+
+#define RMNET_AGG_BYPASS_TIME_NSEC 10000000L
+
+static void reset_aggr_params(struct rmnet_port *port)
+{
+	port->skbagg_head = NULL;
+	port->agg_count = 0;
+	port->agg_state = 0;
+	memset(&port->agg_time, 0, sizeof(struct timespec64));
+}
+
+static void rmnet_send_skb(struct rmnet_port *port, struct sk_buff *skb)
+{
+	if (skb_needs_linearize(skb, port->dev->features)) {
+		if (unlikely(__skb_linearize(skb))) {
+			struct rmnet_priv *priv;
+
+			priv = netdev_priv(port->rmnet_dev);
+			this_cpu_inc(priv->pcpu_stats->stats.tx_drops);
+			dev_kfree_skb_any(skb);
+			return;
+		}
+	}
+
+	dev_queue_xmit(skb);
+}
+
+static void rmnet_map_flush_tx_packet_work(struct work_struct *work)
+{
+	struct sk_buff *skb = NULL;
+	struct rmnet_port *port;
+
+	port = container_of(work, struct rmnet_port, agg_wq);
+
+	spin_lock_bh(&port->agg_lock);
+	if (likely(port->agg_state == -EINPROGRESS)) {
+		/* Buffer may have already been shipped out */
+		if (likely(port->skbagg_head)) {
+			skb = port->skbagg_head;
+			reset_aggr_params(port);
+		}
+		port->agg_state = 0;
+	}
+
+	spin_unlock_bh(&port->agg_lock);
+	if (skb)
+		rmnet_send_skb(port, skb);
+}
+
+static enum hrtimer_restart rmnet_map_flush_tx_packet_queue(struct hrtimer *t)
+{
+	struct rmnet_port *port;
+
+	port = container_of(t, struct rmnet_port, hrtimer);
+
+	schedule_work(&port->agg_wq);
+
+	return HRTIMER_NORESTART;
+}
+
+unsigned int rmnet_map_tx_aggregate(struct sk_buff *skb, struct rmnet_port *port,
+				    struct net_device *orig_dev)
+{
+	struct timespec64 diff, last;
+	unsigned int len = skb->len;
+	struct sk_buff *agg_skb;
+	int size;
+
+	spin_lock_bh(&port->agg_lock);
+	memcpy(&last, &port->agg_last, sizeof(struct timespec64));
+	ktime_get_real_ts64(&port->agg_last);
+
+	if (!port->skbagg_head) {
+		/* Check to see if we should agg first. If the traffic is very
+		 * sparse, don't aggregate.
+		 */
+new_packet:
+		diff = timespec64_sub(port->agg_last, last);
+		size = port->egress_agg_params.bytes - skb->len;
+
+		if (size < 0) {
+			/* dropped */
+			spin_unlock_bh(&port->agg_lock);
+			return 0;
+		}
+
+		if (diff.tv_sec > 0 || diff.tv_nsec > RMNET_AGG_BYPASS_TIME_NSEC ||
+		    size == 0)
+			goto no_aggr;
+
+		port->skbagg_head = skb_copy_expand(skb, 0, size, GFP_ATOMIC);
+		if (!port->skbagg_head)
+			goto no_aggr;
+
+		dev_kfree_skb_any(skb);
+		port->skbagg_head->protocol = htons(ETH_P_MAP);
+		port->agg_count = 1;
+		ktime_get_real_ts64(&port->agg_time);
+		skb_frag_list_init(port->skbagg_head);
+		goto schedule;
+	}
+	diff = timespec64_sub(port->agg_last, port->agg_time);
+	size = port->egress_agg_params.bytes - port->skbagg_head->len;
+
+	if (skb->len > size) {
+		agg_skb = port->skbagg_head;
+		reset_aggr_params(port);
+		spin_unlock_bh(&port->agg_lock);
+		hrtimer_cancel(&port->hrtimer);
+		rmnet_send_skb(port, agg_skb);
+		spin_lock_bh(&port->agg_lock);
+		goto new_packet;
+	}
+
+	if (skb_has_frag_list(port->skbagg_head))
+		port->skbagg_tail->next = skb;
+	else
+		skb_shinfo(port->skbagg_head)->frag_list = skb;
+
+	port->skbagg_head->len += skb->len;
+	port->skbagg_head->data_len += skb->len;
+	port->skbagg_head->truesize += skb->truesize;
+	port->skbagg_tail = skb;
+	port->agg_count++;
+
+	if (diff.tv_sec > 0 || diff.tv_nsec > port->egress_agg_params.time_nsec ||
+	    port->agg_count >= port->egress_agg_params.count ||
+	    port->skbagg_head->len == port->egress_agg_params.bytes) {
+		agg_skb = port->skbagg_head;
+		reset_aggr_params(port);
+		spin_unlock_bh(&port->agg_lock);
+		hrtimer_cancel(&port->hrtimer);
+		rmnet_send_skb(port, agg_skb);
+		return len;
+	}
+
+schedule:
+	if (!hrtimer_active(&port->hrtimer) && port->agg_state != -EINPROGRESS) {
+		port->agg_state = -EINPROGRESS;
+		hrtimer_start(&port->hrtimer,
+			      ns_to_ktime(port->egress_agg_params.time_nsec),
+			      HRTIMER_MODE_REL);
+	}
+	spin_unlock_bh(&port->agg_lock);
+
+	return len;
+
+no_aggr:
+	spin_unlock_bh(&port->agg_lock);
+	skb->protocol = htons(ETH_P_MAP);
+	dev_queue_xmit(skb);
+
+	return len;
+}
+
+void rmnet_map_update_ul_agg_config(struct rmnet_port *port, u32 size,
+				    u32 count, u32 time)
+{
+	spin_lock_bh(&port->agg_lock);
+	port->egress_agg_params.bytes = size;
+	WRITE_ONCE(port->egress_agg_params.count, count);
+	port->egress_agg_params.time_nsec = time * NSEC_PER_USEC;
+	spin_unlock_bh(&port->agg_lock);
+}
+
+void rmnet_map_tx_aggregate_init(struct rmnet_port *port)
+{
+	hrtimer_init(&port->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	port->hrtimer.function = rmnet_map_flush_tx_packet_queue;
+	spin_lock_init(&port->agg_lock);
+	rmnet_map_update_ul_agg_config(port, 4096, 1, 800);
+	INIT_WORK(&port->agg_wq, rmnet_map_flush_tx_packet_work);
+}
+
+void rmnet_map_tx_aggregate_exit(struct rmnet_port *port)
+{
+	hrtimer_cancel(&port->hrtimer);
+	cancel_work_sync(&port->agg_wq);
+
+	spin_lock_bh(&port->agg_lock);
+	if (port->agg_state == -EINPROGRESS) {
+		if (port->skbagg_head) {
+			dev_kfree_skb_any(port->skbagg_head);
+			reset_aggr_params(port);
+		}
+
+		port->agg_state = 0;
+	}
+	spin_unlock_bh(&port->agg_lock);
+}