author     2023-02-21 18:24:12 -0800
committer  2023-02-21 18:24:12 -0800
commit     5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree       cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /arch/sparc/lib/checksum_32.S
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs
(a usage sketch follows this list).
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
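To make the RPS item above concrete, a minimal user-space sketch in C.
It assumes the new knob is exposed as net.core.rps_default_mask (a hex
CPU bitmap applied to netdevs created after it is written); treat the
path and mask value as illustrative.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("/proc/sys/net/core/rps_default_mask", O_WRONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* 0x3 = steer receive processing to CPUs 0-1 on new netdevs */
            if (write(fd, "3\n", 2) != 2)
                    perror("write");
            close(fd);
            return 0;
    }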
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port
range on a socket-by-socket basis (a usage sketch follows this list).
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
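A hedged C sketch for the IP_LOCAL_PORT_RANGE item above: the option
value is a 32-bit word with the lower bound in bits 0-15 and the upper
bound in bits 16-31; the fallback option number below is an assumption
for libcs that do not yet carry the 6.3 uapi headers.

    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef IP_LOCAL_PORT_RANGE
    #define IP_LOCAL_PORT_RANGE 51      /* uapi <linux/in.h>, v6.3 */
    #endif

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            /* restrict this socket's local ports to [50000, 60000] */
            unsigned int range = (60000u << 16) | 50000u;

            if (fd < 0) {
                    perror("socket");
                    return 1;
            }
            if (setsockopt(fd, SOL_IP, IP_LOCAL_PORT_RANGE,
                           &range, sizeof(range)))
                    perror("setsockopt");
            close(fd);
            return 0;
    }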
BPF:
- Add an rbtree data structure following the "next-gen data structure"
precedent set by recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in
collect-metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscalls as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs (a
kernel-side sketch follows this list).
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
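A kernel-side sketch of the __bpf_kfunc annotation mentioned above. The
function, set, and init names are hypothetical; the registration follows
the BTF_SET8 pattern used by in-tree kfuncs in this cycle.

    #include <linux/bpf.h>
    #include <linux/btf.h>
    #include <linux/btf_ids.h>
    #include <linux/module.h>

    /* __bpf_kfunc keeps the function visible to BTF and out of the
     * compiler's hands (no inlining or removal), so BPF programs can
     * call it.
     */
    __bpf_kfunc u32 bpf_example_add(u32 a, u32 b)
    {
            return a + b;
    }

    BTF_SET8_START(example_kfunc_ids)
    BTF_ID_FLAGS(func, bpf_example_add)
    BTF_SET8_END(example_kfunc_ids)

    static const struct btf_kfunc_id_set example_kfunc_set = {
            .owner = THIS_MODULE,
            .set   = &example_kfunc_ids,
    };

    static int __init example_init(void)
    {
            /* expose the kfunc to tracing programs */
            return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
                                             &example_kfunc_set);
    }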
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races of the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly (a
driver-side sketch follows this list).
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of Physical Layer Collision Avoidance (PLCA)
Reconciliation Sublayer (RS) (802.3cg-2019). Modern version of
shared medium Ethernet.
- Support for MAC Merge layer (IEEE 802.3-2018 clause 99). Allowing
preemption of low priority frames by high priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using nl80211 interface instead.
- Improve the CAN bit timing configuration. Use extack to return
error messages directly to user space, update the SJW handling,
including the definition of a new default value that will benefit
CAN-FD controllers, by increasing their oscillator tolerance.
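A driver-side sketch of the C22/C45 MDIO split noted above: bus drivers
now provide distinct mii_bus callbacks for Clause 45 instead of
overloading the Clause 22 ops with flagged register numbers. The
example_* names are hypothetical.

    #include <linux/mdio.h>
    #include <linux/phy.h>

    static int example_read_c22(struct mii_bus *bus, int addr, int regnum)
    {
            /* issue a Clause 22 frame: 5-bit PHY address, 5-bit register */
            return 0;       /* would return the 16-bit register value */
    }

    static int example_read_c45(struct mii_bus *bus, int addr, int devad,
                                int regnum)
    {
            /* issue a Clause 45 frame: devad selects the MMD in the package */
            return 0;
    }

    static void example_bind(struct mii_bus *bus)
    {
            bus->read     = example_read_c22;
            bus->read_c45 = example_read_c45;
            /* .write / .write_c45 are wired up the same way */
    }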
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper PHYs
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'arch/sparc/lib/checksum_32.S')
-rw-r--r--   arch/sparc/lib/checksum_32.S | 457 +++++++++++++++++++++++++++
1 file changed, 457 insertions(+), 0 deletions(-)
diff --git a/arch/sparc/lib/checksum_32.S b/arch/sparc/lib/checksum_32.S
new file mode 100644
index 000000000..781e39b3c
--- /dev/null
+++ b/arch/sparc/lib/checksum_32.S
@@ -0,0 +1,457 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* checksum.S: Sparc optimized checksum code.
+ *
+ *  Copyright(C) 1995 Linus Torvalds
+ *  Copyright(C) 1995 Miguel de Icaza
+ *  Copyright(C) 1996 David S. Miller
+ *  Copyright(C) 1997 Jakub Jelinek
+ *
+ * derived from:
+ *      Linux/Alpha checksum c-code
+ *      Linux/ix86 inline checksum assembly
+ *      RFC1071 Computing the Internet Checksum (esp. Jacobsons m68k code)
+ *      David Mosberger-Tang for optimized reference c-code
+ *      BSD4.4 portable checksum routine
+ */
+
+#include <asm/errno.h>
+#include <asm/export.h>
+
+#define CSUM_BIGCHUNK(buf, offset, sum, t0, t1, t2, t3, t4, t5)        \
+       ldd     [buf + offset + 0x00], t0;                      \
+       ldd     [buf + offset + 0x08], t2;                      \
+       addxcc  t0, sum, sum;                                   \
+       addxcc  t1, sum, sum;                                   \
+       ldd     [buf + offset + 0x10], t4;                      \
+       addxcc  t2, sum, sum;                                   \
+       addxcc  t3, sum, sum;                                   \
+       ldd     [buf + offset + 0x18], t0;                      \
+       addxcc  t4, sum, sum;                                   \
+       addxcc  t5, sum, sum;                                   \
+       addxcc  t0, sum, sum;                                   \
+       addxcc  t1, sum, sum;
+
+#define CSUM_LASTCHUNK(buf, offset, sum, t0, t1, t2, t3)       \
+       ldd     [buf - offset - 0x08], t0;                      \
+       ldd     [buf - offset - 0x00], t2;                      \
+       addxcc  t0, sum, sum;                                   \
+       addxcc  t1, sum, sum;                                   \
+       addxcc  t2, sum, sum;                                   \
+       addxcc  t3, sum, sum;
+
+       /* Do end cruft out of band to get better cache patterns. */
+csum_partial_end_cruft:
+       be      1f                              ! caller asks %o1 & 0x8
+        andcc  %o1, 4, %g0                     ! nope, check for word remaining
+       ldd     [%o0], %g2                      ! load two
+       addcc   %g2, %o2, %o2                   ! add first word to sum
+       addxcc  %g3, %o2, %o2                   ! add second word as well
+       add     %o0, 8, %o0                     ! advance buf ptr
+       addx    %g0, %o2, %o2                   ! add in final carry
+       andcc   %o1, 4, %g0                     ! check again for word remaining
+1:     be      1f                              ! nope, skip this code
+        andcc  %o1, 3, %o1                     ! check for trailing bytes
+       ld      [%o0], %g2                      ! load it
+       addcc   %g2, %o2, %o2                   ! add to sum
+       add     %o0, 4, %o0                     ! advance buf ptr
+       addx    %g0, %o2, %o2                   ! add in final carry
+       andcc   %o1, 3, %g0                     ! check again for trailing bytes
+1:     be      1f                              ! no trailing bytes, return
+        addcc  %o1, -1, %g0                    ! only one byte remains?
+       bne     2f                              ! at least two bytes more
+        subcc  %o1, 2, %o1                     ! only two bytes more?
+       b       4f                              ! only one byte remains
+        or     %g0, %g0, %o4                   ! clear fake hword value
+2:     lduh    [%o0], %o4                      ! get hword
+       be      6f                              ! jmp if only hword remains
+        add    %o0, 2, %o0                     ! advance buf ptr either way
+       sll     %o4, 16, %o4                    ! create upper hword
+4:     ldub    [%o0], %o5                      ! get final byte
+       sll     %o5, 8, %o5                     ! put into place
+       or      %o5, %o4, %o4                   ! coalese with hword (if any)
+6:     addcc   %o4, %o2, %o2                   ! add to sum
+1:     retl                                    ! get outta here
+        addx   %g0, %o2, %o0                   ! add final carry into retval
+
+       /* Also do alignment out of band to get better cache patterns. */
+csum_partial_fix_alignment:
+       cmp     %o1, 6
+       bl      cpte - 0x4
+        andcc  %o0, 0x2, %g0
+       be      1f
+        andcc  %o0, 0x4, %g0
+       lduh    [%o0 + 0x00], %g2
+       sub     %o1, 2, %o1
+       add     %o0, 2, %o0
+       sll     %g2, 16, %g2
+       addcc   %g2, %o2, %o2
+       srl     %o2, 16, %g3
+       addx    %g0, %g3, %g2
+       sll     %o2, 16, %o2
+       sll     %g2, 16, %g3
+       srl     %o2, 16, %o2
+       andcc   %o0, 0x4, %g0
+       or      %g3, %o2, %o2
+1:     be      cpa
+        andcc  %o1, 0xffffff80, %o3
+       ld      [%o0 + 0x00], %g2
+       sub     %o1, 4, %o1
+       addcc   %g2, %o2, %o2
+       add     %o0, 4, %o0
+       addx    %g0, %o2, %o2
+       b       cpa
+        andcc  %o1, 0xffffff80, %o3
+
+       /* The common case is to get called with a nicely aligned
+        * buffer of size 0x20.  Follow the code path for that case.
+        */
+       .globl  csum_partial
+       EXPORT_SYMBOL(csum_partial)
+csum_partial:                  /* %o0=buf, %o1=len, %o2=sum */
+       andcc   %o0, 0x7, %g0                   ! alignment problems?
+       bne     csum_partial_fix_alignment      ! yep, handle it
+        sethi  %hi(cpte - 8), %g7              ! prepare table jmp ptr
+       andcc   %o1, 0xffffff80, %o3            ! num loop iterations
+cpa:   be      3f                              ! none to do
+        andcc  %o1, 0x70, %g1                  ! clears carry flag too
+5:     CSUM_BIGCHUNK(%o0, 0x00, %o2, %o4, %o5, %g2, %g3, %g4, %g5)
+       CSUM_BIGCHUNK(%o0, 0x20, %o2, %o4, %o5, %g2, %g3, %g4, %g5)
+       CSUM_BIGCHUNK(%o0, 0x40, %o2, %o4, %o5, %g2, %g3, %g4, %g5)
+       CSUM_BIGCHUNK(%o0, 0x60, %o2, %o4, %o5, %g2, %g3, %g4, %g5)
+       addx    %g0, %o2, %o2                   ! sink in final carry
+       subcc   %o3, 128, %o3                   ! detract from loop iters
+       bne     5b                              ! more to do
+        add    %o0, 128, %o0                   ! advance buf ptr
+       andcc   %o1, 0x70, %g1                  ! clears carry flag too
+3:     be      cpte                            ! nope
+        andcc  %o1, 0xf, %g0                   ! anything left at all?
+       srl     %g1, 1, %o4                     ! compute offset
+       sub     %g7, %g1, %g7                   ! adjust jmp ptr
+       sub     %g7, %o4, %g7                   ! final jmp ptr adjust
+       jmp     %g7 + %lo(cpte - 8)             ! enter the table
+        add    %o0, %g1, %o0                   ! advance buf ptr
+cptbl: CSUM_LASTCHUNK(%o0, 0x68, %o2, %g2, %g3, %g4, %g5)
+       CSUM_LASTCHUNK(%o0, 0x58, %o2, %g2, %g3, %g4, %g5)
+       CSUM_LASTCHUNK(%o0, 0x48, %o2, %g2, %g3, %g4, %g5)
+       CSUM_LASTCHUNK(%o0, 0x38, %o2, %g2, %g3, %g4, %g5)
+       CSUM_LASTCHUNK(%o0, 0x28, %o2, %g2, %g3, %g4, %g5)
+       CSUM_LASTCHUNK(%o0, 0x18, %o2, %g2, %g3, %g4, %g5)
+       CSUM_LASTCHUNK(%o0, 0x08, %o2, %g2, %g3, %g4, %g5)
+       addx    %g0, %o2, %o2                   ! fetch final carry
+       andcc   %o1, 0xf, %g0                   ! anything left at all?
+cpte:  bne     csum_partial_end_cruft          ! yep, handle it
+        andcc  %o1, 8, %g0                     ! check how much
+cpout: retl                                    ! get outta here
+        mov    %o2, %o0                        ! return computed csum
+
+/* Work around cpp -rob */
+#define ALLOC #alloc
+#define EXECINSTR #execinstr
+#define EX(x,y)                        \
+98:    x,y;                            \
+       .section __ex_table,ALLOC;      \
+       .align  4;                      \
+       .word   98b, cc_fault;          \
+       .text;                          \
+       .align  4
+
+       /* This aligned version executes typically in 8.5 superscalar cycles, this
+        * is the best I can do.  I say 8.5 because the final add will pair with
+        * the next ldd in the main unrolled loop.  Thus the pipe is always full.
+        * If you change these macros (including order of instructions),
+        * please check the fixup code below as well.
+        */
+#define CSUMCOPY_BIGCHUNK_ALIGNED(src, dst, sum, off, t0, t1, t2, t3, t4, t5, t6, t7)  \
+       EX(ldd  [src + off + 0x00], t0);        \
+       EX(ldd  [src + off + 0x08], t2);        \
+       addxcc  t0, sum, sum;                   \
+       EX(ldd  [src + off + 0x10], t4);        \
+       addxcc  t1, sum, sum;                   \
+       EX(ldd  [src + off + 0x18], t6);        \
+       addxcc  t2, sum, sum;                   \
+       EX(std  t0, [dst + off + 0x00]);        \
+       addxcc  t3, sum, sum;                   \
+       EX(std  t2, [dst + off + 0x08]);        \
+       addxcc  t4, sum, sum;                   \
+       EX(std  t4, [dst + off + 0x10]);        \
+       addxcc  t5, sum, sum;                   \
+       EX(std  t6, [dst + off + 0x18]);        \
+       addxcc  t6, sum, sum;                   \
+       addxcc  t7, sum, sum;
+
+       /* 12 superscalar cycles seems to be the limit for this case,
+        * because of this we thus do all the ldd's together to get
+        * Viking MXCC into streaming mode.  Ho hum...
+        */
+#define CSUMCOPY_BIGCHUNK(src, dst, sum, off, t0, t1, t2, t3, t4, t5, t6, t7)  \
+       EX(ldd  [src + off + 0x00], t0);        \
+       EX(ldd  [src + off + 0x08], t2);        \
+       EX(ldd  [src + off + 0x10], t4);        \
+       EX(ldd  [src + off + 0x18], t6);        \
+       EX(st   t0, [dst + off + 0x00]);        \
+       addxcc  t0, sum, sum;                   \
+       EX(st   t1, [dst + off + 0x04]);        \
+       addxcc  t1, sum, sum;                   \
+       EX(st   t2, [dst + off + 0x08]);        \
+       addxcc  t2, sum, sum;                   \
+       EX(st   t3, [dst + off + 0x0c]);        \
+       addxcc  t3, sum, sum;                   \
+       EX(st   t4, [dst + off + 0x10]);        \
+       addxcc  t4, sum, sum;                   \
+       EX(st   t5, [dst + off + 0x14]);        \
+       addxcc  t5, sum, sum;                   \
+       EX(st   t6, [dst + off + 0x18]);        \
+       addxcc  t6, sum, sum;                   \
+       EX(st   t7, [dst + off + 0x1c]);        \
+       addxcc  t7, sum, sum;
+
+       /* Yuck, 6 superscalar cycles... */
+#define CSUMCOPY_LASTCHUNK(src, dst, sum, off, t0, t1, t2, t3) \
+       EX(ldd  [src - off - 0x08], t0);        \
+       EX(ldd  [src - off - 0x00], t2);        \
+       addxcc  t0, sum, sum;                   \
+       EX(st   t0, [dst - off - 0x08]);        \
+       addxcc  t1, sum, sum;                   \
+       EX(st   t1, [dst - off - 0x04]);        \
+       addxcc  t2, sum, sum;                   \
+       EX(st   t2, [dst - off - 0x00]);        \
+       addxcc  t3, sum, sum;                   \
+       EX(st   t3, [dst - off + 0x04]);
+
+       /* Handle the end cruft code out of band for better cache patterns. */
+cc_end_cruft:
+       be      1f
+        andcc  %o3, 4, %g0
+       EX(ldd  [%o0 + 0x00], %g2)
+       add     %o1, 8, %o1
+       addcc   %g2, %g7, %g7
+       add     %o0, 8, %o0
+       addxcc  %g3, %g7, %g7
+       EX(st   %g2, [%o1 - 0x08])
+       addx    %g0, %g7, %g7
+       andcc   %o3, 4, %g0
+       EX(st   %g3, [%o1 - 0x04])
+1:     be      1f
+        andcc  %o3, 3, %o3
+       EX(ld   [%o0 + 0x00], %g2)
+       add     %o1, 4, %o1
+       addcc   %g2, %g7, %g7
+       EX(st   %g2, [%o1 - 0x04])
+       addx    %g0, %g7, %g7
+       andcc   %o3, 3, %g0
+       add     %o0, 4, %o0
+1:     be      1f
+        addcc  %o3, -1, %g0
+       bne     2f
+        subcc  %o3, 2, %o3
+       b       4f
+        or     %g0, %g0, %o4
+2:     EX(lduh [%o0 + 0x00], %o4)
+       add     %o0, 2, %o0
+       EX(sth  %o4, [%o1 + 0x00])
+       be      6f
+        add    %o1, 2, %o1
+       sll     %o4, 16, %o4
+4:     EX(ldub [%o0 + 0x00], %o5)
+       EX(stb  %o5, [%o1 + 0x00])
+       sll     %o5, 8, %o5
+       or      %o5, %o4, %o4
+6:     addcc   %o4, %g7, %g7
+1:     retl
+        addx   %g0, %g7, %o0
+
+       /* Also, handle the alignment code out of band. */
+cc_dword_align:
+       cmp     %g1, 16
+       bge     1f
+        srl    %g1, 1, %o3
+2:     cmp     %o3, 0
+       be,a    ccte
+        andcc  %g1, 0xf, %o3
+       andcc   %o3, %o0, %g0   ! Check %o0 only (%o1 has the same last 2 bits)
+       be,a    2b
+        srl    %o3, 1, %o3
+1:     andcc   %o0, 0x1, %g0
+       bne     ccslow
+        andcc  %o0, 0x2, %g0
+       be      1f
+        andcc  %o0, 0x4, %g0
+       EX(lduh [%o0 + 0x00], %g4)
+       sub     %g1, 2, %g1
+       EX(sth  %g4, [%o1 + 0x00])
+       add     %o0, 2, %o0
+       sll     %g4, 16, %g4
+       addcc   %g4, %g7, %g7
+       add     %o1, 2, %o1
+       srl     %g7, 16, %g3
+       addx    %g0, %g3, %g4
+       sll     %g7, 16, %g7
+       sll     %g4, 16, %g3
+       srl     %g7, 16, %g7
+       andcc   %o0, 0x4, %g0
+       or      %g3, %g7, %g7
+1:     be      3f
+        andcc  %g1, 0xffffff80, %g0
+       EX(ld   [%o0 + 0x00], %g4)
+       sub     %g1, 4, %g1
+       EX(st   %g4, [%o1 + 0x00])
+       add     %o0, 4, %o0
+       addcc   %g4, %g7, %g7
+       add     %o1, 4, %o1
+       addx    %g0, %g7, %g7
+       b       3f
+        andcc  %g1, 0xffffff80, %g0
+
+       /* Sun, you just can't beat me, you just can't.  Stop trying,
+        * give up.  I'm serious, I am going to kick the living shit
+        * out of you, game over, lights out.
+        */
+       .align  8
+       .globl  __csum_partial_copy_sparc_generic
+       EXPORT_SYMBOL(__csum_partial_copy_sparc_generic)
+__csum_partial_copy_sparc_generic:
+                                       /* %o0=src, %o1=dest, %g1=len, %g7=sum */
+       xor     %o0, %o1, %o4           ! get changing bits
+       andcc   %o4, 3, %g0             ! check for mismatched alignment
+       bne     ccslow                  ! better this than unaligned/fixups
+        andcc  %o0, 7, %g0             ! need to align things?
+       bne     cc_dword_align          ! yes, we check for short lengths there
+        andcc  %g1, 0xffffff80, %g0    ! can we use unrolled loop?
+3:     be      3f                      ! nope, less than one loop remains
+        andcc  %o1, 4, %g0             ! dest aligned on 4 or 8 byte boundary?
+       be      ccdbl + 4               ! 8 byte aligned, kick ass
+5:     CSUMCOPY_BIGCHUNK(%o0,%o1,%g7,0x00,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
+       CSUMCOPY_BIGCHUNK(%o0,%o1,%g7,0x20,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
+       CSUMCOPY_BIGCHUNK(%o0,%o1,%g7,0x40,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
+       CSUMCOPY_BIGCHUNK(%o0,%o1,%g7,0x60,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
+       sub     %g1, 128, %g1           ! detract from length
+       addx    %g0, %g7, %g7           ! add in last carry bit
+       andcc   %g1, 0xffffff80, %g0    ! more to csum?
+       add     %o0, 128, %o0           ! advance src ptr
+       bne     5b                      ! we did not go negative, continue looping
+        add    %o1, 128, %o1           ! advance dest ptr
+3:     andcc   %g1, 0x70, %o2          ! can use table?
+ccmerge:be     ccte                    ! nope, go and check for end cruft
+        andcc  %g1, 0xf, %o3           ! get low bits of length (clears carry btw)
+       srl     %o2, 1, %o4             ! begin negative offset computation
+       sethi   %hi(12f), %o5           ! set up table ptr end
+       add     %o0, %o2, %o0           ! advance src ptr
+       sub     %o5, %o4, %o5           ! continue table calculation
+       sll     %o2, 1, %g2             ! constant multiplies are fun...
+       sub     %o5, %g2, %o5           ! some more adjustments
+       jmp     %o5 + %lo(12f)          ! jump into it, duff style, wheee...
+        add    %o1, %o2, %o1           ! advance dest ptr (carry is clear btw)
+cctbl: CSUMCOPY_LASTCHUNK(%o0,%o1,%g7,0x68,%g2,%g3,%g4,%g5)
+       CSUMCOPY_LASTCHUNK(%o0,%o1,%g7,0x58,%g2,%g3,%g4,%g5)
+       CSUMCOPY_LASTCHUNK(%o0,%o1,%g7,0x48,%g2,%g3,%g4,%g5)
+       CSUMCOPY_LASTCHUNK(%o0,%o1,%g7,0x38,%g2,%g3,%g4,%g5)
+       CSUMCOPY_LASTCHUNK(%o0,%o1,%g7,0x28,%g2,%g3,%g4,%g5)
+       CSUMCOPY_LASTCHUNK(%o0,%o1,%g7,0x18,%g2,%g3,%g4,%g5)
+       CSUMCOPY_LASTCHUNK(%o0,%o1,%g7,0x08,%g2,%g3,%g4,%g5)
+12:    addx    %g0, %g7, %g7
+       andcc   %o3, 0xf, %g0           ! check for low bits set
+ccte:  bne     cc_end_cruft            ! something left, handle it out of band
+        andcc  %o3, 8, %g0             ! begin checks for that code
+       retl                            ! return
+        mov    %g7, %o0                ! give em the computed checksum
+ccdbl: CSUMCOPY_BIGCHUNK_ALIGNED(%o0,%o1,%g7,0x00,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
+       CSUMCOPY_BIGCHUNK_ALIGNED(%o0,%o1,%g7,0x20,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
+       CSUMCOPY_BIGCHUNK_ALIGNED(%o0,%o1,%g7,0x40,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
+       CSUMCOPY_BIGCHUNK_ALIGNED(%o0,%o1,%g7,0x60,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
+       sub     %g1, 128, %g1           ! detract from length
+       addx    %g0, %g7, %g7           ! add in last carry bit
+       andcc   %g1, 0xffffff80, %g0    ! more to csum?
+       add     %o0, 128, %o0           ! advance src ptr
+       bne     ccdbl                   ! we did not go negative, continue looping
+        add    %o1, 128, %o1           ! advance dest ptr
+       b       ccmerge                 ! finish it off, above
+        andcc  %g1, 0x70, %o2          ! can use table? (clears carry btw)
+
+ccslow:        cmp     %g1, 0
+       mov     0, %g5
+       bleu    4f
+        andcc  %o0, 1, %o5
+       be,a    1f
+        srl    %g1, 1, %g4
+       sub     %g1, 1, %g1
+       EX(ldub [%o0], %g5)
+       add     %o0, 1, %o0
+       EX(stb  %g5, [%o1])
+       srl     %g1, 1, %g4
+       add     %o1, 1, %o1
+1:     cmp     %g4, 0
+       be,a    3f
+        andcc  %g1, 1, %g0
+       andcc   %o0, 2, %g0
+       be,a    1f
+        srl    %g4, 1, %g4
+       EX(lduh [%o0], %o4)
+       sub     %g1, 2, %g1
+       srl     %o4, 8, %g2
+       sub     %g4, 1, %g4
+       EX(stb  %g2, [%o1])
+       add     %o4, %g5, %g5
+       EX(stb  %o4, [%o1 + 1])
+       add     %o0, 2, %o0
+       srl     %g4, 1, %g4
+       add     %o1, 2, %o1
+1:     cmp     %g4, 0
+       be,a    2f
+        andcc  %g1, 2, %g0
+       EX(ld   [%o0], %o4)
+5:     srl     %o4, 24, %g2
+       srl     %o4, 16, %g3
+       EX(stb  %g2, [%o1])
+       srl     %o4, 8, %g2
+       EX(stb  %g3, [%o1 + 1])
+       add     %o0, 4, %o0
+       EX(stb  %g2, [%o1 + 2])
+       addcc   %o4, %g5, %g5
+       EX(stb  %o4, [%o1 + 3])
+       addx    %g5, %g0, %g5   ! I am now to lazy to optimize this (question it
+       add     %o1, 4, %o1     ! is worthy). Maybe some day - with the sll/srl
+       subcc   %g4, 1, %g4     ! tricks
+       bne,a   5b
+        EX(ld  [%o0], %o4)
+       sll     %g5, 16, %g2
+       srl     %g5, 16, %g5
+       srl     %g2, 16, %g2
+       andcc   %g1, 2, %g0
+       add     %g2, %g5, %g5
+2:     be,a    3f
+        andcc  %g1, 1, %g0
+       EX(lduh [%o0], %o4)
+       andcc   %g1, 1, %g0
+       srl     %o4, 8, %g2
+       add     %o0, 2, %o0
+       EX(stb  %g2, [%o1])
+       add     %g5, %o4, %g5
+       EX(stb  %o4, [%o1 + 1])
+       add     %o1, 2, %o1
+3:     be,a    1f
+        sll    %g5, 16, %o4
+       EX(ldub [%o0], %g2)
+       sll     %g2, 8, %o4
+       EX(stb  %g2, [%o1])
+       add     %g5, %o4, %g5
+       sll     %g5, 16, %o4
+1:     addcc   %o4, %g5, %g5
+       srl     %g5, 16, %o4
+       addx    %g0, %o4, %g5
+       orcc    %o5, %g0, %g0
+       be      4f
+        srl    %g5, 8, %o4
+       and     %g5, 0xff, %g2
+       and     %o4, 0xff, %o4
+       sll     %g2, 8, %g2
+       or      %g2, %o4, %g5
+4:     addcc   %g7, %g5, %g7
+       retl
+        addx   %g0, %g7, %o0
+
+/* We do these strange calculations for the csum_*_from_user case only, ie.
+ * we only bother with faults on loads... */
+
+cc_fault:
+       ret
+        clr    %o0
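For readers who do not speak SPARC assembly: the arithmetic this file
implements is the RFC 1071 one's-complement sum named in its header
comment. A simplified C reference follows (the function name is made
up; the kernel routine differs in that it returns the unfolded 32-bit
partial sum and deals with endianness and alignment itself).

    #include <stddef.h>
    #include <stdint.h>

    static uint32_t csum_partial_ref(const uint8_t *buf, size_t len)
    {
            uint32_t sum = 0;

            while (len > 1) {               /* sum 16-bit words */
                    sum += (uint32_t)buf[0] << 8 | buf[1];
                    buf += 2;
                    len -= 2;
            }
            if (len)                        /* pad a trailing odd byte */
                    sum += (uint32_t)buf[0] << 8;
            while (sum >> 16)               /* fold the carries back in */
                    sum = (sum & 0xffff) + (sum >> 16);
            return sum;     /* caller complements this for the wire format */
    }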