From 5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Tue, 21 Feb 2023 18:24:12 -0800
Subject: Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next

Pull networking updates from Jakub Kicinski:

"Core:

 - Add dedicated kmem_cache for typical/small skb->head, avoid having to access struct page at kfree time, and improve memory use.
 - Introduce sysctl to set default RPS configuration for new netdevs.
 - Define Netlink protocol specification format which can be used to describe messages used by each family and auto-generate parsers. Add tools for generating kernel data structures and uAPI headers.
 - Expose all net/core sysctls inside netns.
 - Remove 4s sleep in netpoll if carrier is instantly detected on boot.
 - Add configurable limit of MDB entries per port, and port-vlan.
 - Continue populating drop reasons throughout the stack.
 - Retire a handful of legacy Qdiscs and classifiers.

Protocols:

 - Support IPv4 big TCP (TSO frames larger than 64kB).
 - Add IP_LOCAL_PORT_RANGE socket option, to control the local port range on a socket-by-socket basis.
 - Track and report in procfs the number of MPTCP sockets used.
 - Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path manager.
 - IPv6: don't check net.ipv6.route.max_size and rely on garbage collection to free memory (similarly to IPv4).
 - Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC 8986).
 - ICMP: add per-rate-limit counters.
 - Add support for user scanning requests in ieee802154.
 - Remove static WEP support.
 - Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate reporting.
 - Wi-Fi 7 EHT channel puncturing support (client & AP).

BPF:

 - Add an rbtree data structure following the "next-gen data structure" precedent set by the recently added linked list, that is, by using kfunc + kptr instead of adding a new BPF map type.
 - Expose XDP hints via kfuncs, with initial support for RX hash and timestamp metadata.
 - Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to better support decap on GRE tunnel devices not operating in collect-metadata mode.
 - Improve the x86 JIT's codegen for PROBE_MEM runtime error checks.
 - Remove the need for trace_printk_lock for the bpf_trace_printk and bpf_trace_vprintk helpers.
 - Extend libbpf's bpf_tracing.h support for tracing arguments of kprobes/uprobes, and of syscalls as a special case.
 - Significantly reduce the search time for module symbols by livepatch and BPF.
 - Enable cpumasks to be used as kptrs, which is useful for tracing programs tracking which tasks end up running on which CPUs in different time intervals.
 - Add support for BPF trampolines on s390x and riscv64.
 - Add capability to export the XDP features supported by the NIC.
 - Add the __bpf_kfunc tag for marking kernel functions as kfuncs.
 - Add the cgroup.memory=nobpf kernel parameter option to disable BPF memory accounting in container environments.

Netfilter:

 - Remove the CLUSTERIP target. It has been marked as obsolete for years, and we still have WARN splats wrt races of the out-of-band /proc interface installed by this target.
 - Add 'destroy' commands to nf_tables. They are identical to the existing 'delete' commands, but do not return an error if the referenced object (set, chain, rule...) did not exist.

Driver API:

 - Improve cpumask_local_spread() locality to help NICs set the right IRQ affinity on AMD platforms.
 - Separate C22 and C45 MDIO bus transactions more clearly.
 - Introduce new DCB table to control DSCP rewrite on egress.
 - Support configuration of the Physical Layer Collision Avoidance (PLCA) Reconciliation Sublayer (RS) (802.3cg-2019), a modern version of shared-medium Ethernet.
 - Support for the MAC Merge layer (IEEE 802.3-2018 clause 99), allowing preemption of low-priority frames by high-priority frames.
 - Add support for controlling MACsec offload using netlink SET.
 - Rework devlink instance refcounts to allow registration and de-registration under the instance lock. Split the code into multiple files, drop some of the unnecessarily granular locks and factor out common parts of netlink operation handling.
 - Add TX frame aggregation parameters (for USB drivers).
 - Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning messages with notifications for debug.
 - Allow offloading of UDP NEW connections via act_ct.
 - Add support for per-action HW stats in TC.
 - Support hardware miss to TC action (continue processing in SW from a specific point in the action chain).
 - Warn if the old Wireless Extensions user space interface is used with modern cfg80211/mac80211 drivers. Do not support Wireless Extensions for Wi-Fi 7 devices at all. Everyone should switch to using the nl80211 interface instead.
 - Improve the CAN bit timing configuration. Use extack to return error messages directly to user space, update the SJW handling, including the definition of a new default value that will benefit CAN-FD controllers, by increasing their oscillator tolerance.

New hardware / drivers:

 - Ethernet:
   - nVidia BlueField-3 support (control traffic driver)
   - Ethernet support for imx93 SoCs
   - Motorcomm yt8531 gigabit Ethernet PHY
   - onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
   - Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
   - Amlogic gxl MDIO mux

 - WiFi:
   - RealTek RTL8188EU (rtl8xxxu)
   - Qualcomm Wi-Fi 7 devices (ath12k)

 - CAN:
   - Renesas R-Car V4H

Drivers:

 - Bluetooth:
   - Set Per Platform Antenna Gain (PPAG) for Intel controllers.
 - Ethernet NICs:
   - Intel (1G, igc):
     - support TSN / Qbv / packet scheduling features of the i226 model
   - Intel (100G, ice):
     - use the GNSS subsystem instead of TTY
     - multi-buffer XDP support
     - extend support for GPIO pins to E823 devices
   - nVidia/Mellanox:
     - update the shared buffer configuration on PFC commands
     - implement the PTP adjphase function for HW offset control
     - TC support for Geneve and GRE with VF tunnel offload
     - more efficient crypto key management method
     - multi-port eswitch support
   - Netronome/Corigine:
     - add DCB IEEE support
     - support IPsec offloading for NFP3800
   - Freescale/NXP (enetc):
     - support XDP_REDIRECT for XDP non-linear buffers
     - improve reconfig, avoid link flap and waiting for idle
     - support MAC Merge layer
   - Other NICs:
     - sfc/ef100: add basic devlink support for ef100
     - ionic: rx_push mode operation (writing descriptors via MMIO)
     - bnxt: use the auxiliary bus abstraction for RDMA
     - r8169: disable ASPM and reset bus in case of tx timeout
     - cpsw: support QSGMII mode for J721e CPSW9G
     - cpts: support pulse-per-second output
     - ngbe: add an mdio bus driver
     - usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
     - r8152: handle devices with FW with NCM support
     - amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
     - virtio-net: support multi-buffer XDP
     - virtio/vsock: replace virtio_vsock_pkt with sk_buff
     - tsnep: XDP support

 - Ethernet high-speed switches:
   - nVidia/Mellanox (mlxsw):
     - add support for latency TLV (in FW control messages)
   - Microchip (sparx5):
     - separate explicit and implicit traffic forwarding rules, make the implicit rules always active
     - add support for egress DSCP rewrite
     - IS0 VCAP support (Ingress Classification)
     - IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS, etc.)
     - ES2 VCAP support (Egress Access Control)
     - support for Per-Stream Filtering and Policing (802.1Q, 8.6.5.1)

 - Ethernet embedded switches:
   - Marvell (mv88e6xxx):
     - add MAB (port auth) offload support
     - enable PTP receive for mv88e6390
   - NXP (ocelot):
     - support MAC Merge layer
     - support for the vsc7512 internal copper PHYs
   - Microchip:
     - lan9303: convert to PHYLINK
     - lan966x: support TC flower filter statistics
     - lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
     - lan937x: support Credit Based Shaper configuration
     - ksz9477: support Energy Efficient Ethernet
   - other:
     - qca8k: convert to regmap read/write API, use bulk operations
     - rswitch: improve TX timestamp accuracy

 - Intel WiFi (iwlwifi):
   - EHT (Wi-Fi 7) rate reporting
   - STEP equalizer support: transfer some STEP (connection to radio on platforms with integrated wifi) related parameters from the BIOS to the firmware.
 - Qualcomm 802.11ax WiFi (ath11k):
   - IPQ5018 support
   - Fine Timing Measurement (FTM) responder role support
   - channel 177 support

 - MediaTek WiFi (mt76):
   - per-PHY LED support
   - mt7996: EHT (Wi-Fi 7) support
   - Wireless Ethernet Dispatch (WED) reset support
   - switch to using the page pool allocator

 - RealTek WiFi (rtw89):
   - support new version of Bluetooth co-existence

 - Mobile:
   - rmnet: support TX aggregation"

* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
  page_pool: add a comment explaining the fragment counter usage
  net: ethtool: fix __ethtool_dev_mm_supported() implementation
  ethtool: pse-pd: Fix double word in comments
  xsk: add linux/vmalloc.h to xsk.c
  sefltests: netdevsim: wait for devlink instance after netns removal
  selftest: fib_tests: Always cleanup before exit
  net/mlx5e: Align IPsec ASO result memory to be as required by hardware
  net/mlx5e: TC, Set CT miss to the specific ct action instance
  net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
  net/mlx5: Refactor tc miss handling to a single function
  net/mlx5: Kconfig: Make tc offload depend on tc skb extension
  net/sched: flower: Support hardware miss to tc action
  net/sched: flower: Move filter handle initialization earlier
  net/sched: cls_api: Support hardware miss to tc action
  net/sched: Rename user cookie and act cookie
  sfc: fix builds without CONFIG_RTC_LIB
  sfc: clean up some inconsistent indentings
  net/mlx4_en: Introduce flexible array to silence overflow warning
  net: lan966x: Fix possible deadlock inside PTP
  net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
  ...
---
 drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 734 ++++++++++++++++++++++++++++++++++
 1 file changed, 734 insertions(+)
 create mode 100644 drivers/gpu/drm/msm/adreno/a6xx_hfi.c

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
new file mode 100644
index 000000000..2cc83e049
--- /dev/null
+++ b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
@@ -0,0 +1,734 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2017-2018 The Linux Foundation. All rights reserved. */
+
+#include <linux/completion.h>
+#include <linux/circ_buf.h>
+#include <linux/list.h>
+
+#include "a6xx_gmu.h"
+#include "a6xx_gmu.xml.h"
+#include "a6xx_gpu.h"
+
+#define HFI_MSG_ID(val) [val] = #val
+
+static const char * const a6xx_hfi_msg_id[] = {
+	HFI_MSG_ID(HFI_H2F_MSG_INIT),
+	HFI_MSG_ID(HFI_H2F_MSG_FW_VERSION),
+	HFI_MSG_ID(HFI_H2F_MSG_BW_TABLE),
+	HFI_MSG_ID(HFI_H2F_MSG_PERF_TABLE),
+	HFI_MSG_ID(HFI_H2F_MSG_TEST),
+	HFI_MSG_ID(HFI_H2F_MSG_START),
+	HFI_MSG_ID(HFI_H2F_MSG_CORE_FW_START),
+	HFI_MSG_ID(HFI_H2F_MSG_GX_BW_PERF_VOTE),
+	HFI_MSG_ID(HFI_H2F_MSG_PREPARE_SLUMBER),
+};
+
+static int a6xx_hfi_queue_read(struct a6xx_gmu *gmu,
+	struct a6xx_hfi_queue *queue, u32 *data, u32 dwords)
+{
+	struct a6xx_hfi_queue_header *header = queue->header;
+	u32 i, hdr, index = header->read_index;
+
+	if (header->read_index == header->write_index) {
+		header->rx_request = 1;
+		return 0;
+	}
+
+	hdr = queue->data[index];
+
+	queue->history[(queue->history_idx++) % HFI_HISTORY_SZ] = index;
+
+	/*
+	 * If we are to assume that the GMU firmware is in fact a rational actor
+	 * and is programmed to not send us a larger response than we expect
+	 * then we can also assume that if the header size is unexpectedly large
+	 * that it is due to memory corruption and/or hardware failure. In this
+	 * case the only reasonable course of action is to BUG() to help harden
+	 * the failure.
+	 */
+
+	BUG_ON(HFI_HEADER_SIZE(hdr) > dwords);
+
+	for (i = 0; i < HFI_HEADER_SIZE(hdr); i++) {
+		data[i] = queue->data[index];
+		index = (index + 1) % header->size;
+	}
+
+	if (!gmu->legacy)
+		index = ALIGN(index, 4) % header->size;
+
+	header->read_index = index;
+	return HFI_HEADER_SIZE(hdr);
+}
+
+static int a6xx_hfi_queue_write(struct a6xx_gmu *gmu,
+	struct a6xx_hfi_queue *queue, u32 *data, u32 dwords)
+{
+	struct a6xx_hfi_queue_header *header = queue->header;
+	u32 i, space, index = header->write_index;
+
+	spin_lock(&queue->lock);
+
+	space = CIRC_SPACE(header->write_index, header->read_index,
+		header->size);
+	if (space < dwords) {
+		header->dropped++;
+		spin_unlock(&queue->lock);
+		return -ENOSPC;
+	}
+
+	queue->history[(queue->history_idx++) % HFI_HISTORY_SZ] = index;
+
+	for (i = 0; i < dwords; i++) {
+		queue->data[index] = data[i];
+		index = (index + 1) % header->size;
+	}
+
+	/* Cookify any unused data at the end of the write buffer */
+	if (!gmu->legacy) {
+		for (; index % 4; index = (index + 1) % header->size)
+			queue->data[index] = 0xfafafafa;
+	}
+
+	header->write_index = index;
+	spin_unlock(&queue->lock);
+
+	gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, 0x01);
+	return 0;
+}
+
+static int a6xx_hfi_wait_for_ack(struct a6xx_gmu *gmu, u32 id, u32 seqnum,
+	u32 *payload, u32 payload_size)
+{
+	struct a6xx_hfi_queue *queue = &gmu->queues[HFI_RESPONSE_QUEUE];
+	u32 val;
+	int ret;
+
+	/* Wait for a response */
+	ret = gmu_poll_timeout(gmu, REG_A6XX_GMU_GMU2HOST_INTR_INFO, val,
+		val & A6XX_GMU_GMU2HOST_INTR_INFO_MSGQ, 100, 5000);
+
+	if (ret) {
+		DRM_DEV_ERROR(gmu->dev,
+			"Message %s id %d timed out waiting for response\n",
+			a6xx_hfi_msg_id[id], seqnum);
+		return -ETIMEDOUT;
+	}
+
+	/* Clear the interrupt */
+	gmu_write(gmu, REG_A6XX_GMU_GMU2HOST_INTR_CLR,
+		A6XX_GMU_GMU2HOST_INTR_INFO_MSGQ);
+
+	for (;;) {
+		struct a6xx_hfi_msg_response resp;
+
+		/* Get the next packet */
+		ret = a6xx_hfi_queue_read(gmu, queue, (u32 *) &resp,
+			sizeof(resp) >> 2);
+
+		/* If the queue is empty our response never made it */
+		if (!ret) {
+			DRM_DEV_ERROR(gmu->dev,
+				"The HFI response queue is unexpectedly empty\n");
+
+			return -ENOENT;
+		}
+
+		if (HFI_HEADER_ID(resp.header) == HFI_F2H_MSG_ERROR) {
+			struct a6xx_hfi_msg_error *error =
+				(struct a6xx_hfi_msg_error *) &resp;
+
+			DRM_DEV_ERROR(gmu->dev, "GMU firmware error %d\n",
+				error->code);
+			continue;
+		}
+
+		if (seqnum != HFI_HEADER_SEQNUM(resp.ret_header)) {
+			DRM_DEV_ERROR(gmu->dev,
+				"Unexpected message id %d on the response queue\n",
+				HFI_HEADER_SEQNUM(resp.ret_header));
+			continue;
+		}
+
+		if (resp.error) {
+			DRM_DEV_ERROR(gmu->dev,
+				"Message %s id %d returned error %d\n",
+				a6xx_hfi_msg_id[id], seqnum, resp.error);
+			return -EINVAL;
+		}
+
+		/* All is well, copy over the buffer */
+		if (payload && payload_size)
+			memcpy(payload, resp.payload,
+				min_t(u32, payload_size, sizeof(resp.payload)));
+
+		return 0;
+	}
+}
+
+static int a6xx_hfi_send_msg(struct a6xx_gmu *gmu, int id,
+	void *data, u32 size, u32 *payload, u32 payload_size)
+{
+	struct a6xx_hfi_queue *queue = &gmu->queues[HFI_COMMAND_QUEUE];
+	int ret, dwords = size >> 2;
+	u32 seqnum;
+
+	seqnum = atomic_inc_return(&queue->seqnum) % 0xfff;
+
+	/* First dword of the message is the message header - fill it in */
+	*((u32 *) data) = (seqnum << 20) | (HFI_MSG_CMD << 16) |
+		(dwords << 8) | id;
+
+	ret = a6xx_hfi_queue_write(gmu, queue, data, dwords);
+	if (ret) {
+		DRM_DEV_ERROR(gmu->dev, "Unable to send message %s id %d\n",
+			a6xx_hfi_msg_id[id], seqnum);
+		return ret;
+	}
+
+	return a6xx_hfi_wait_for_ack(gmu, id, seqnum, payload, payload_size);
+}
+
+static int a6xx_hfi_send_gmu_init(struct a6xx_gmu *gmu, int boot_state)
+{
+	struct a6xx_hfi_msg_gmu_init_cmd msg = { 0 };
+
+	msg.dbg_buffer_addr = (u32) gmu->debug.iova;
+	msg.dbg_buffer_size = (u32) gmu->debug.size;
+	msg.boot_state = boot_state;
+
+	return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_INIT, &msg, sizeof(msg),
+		NULL, 0);
+}
+
+static int a6xx_hfi_get_fw_version(struct a6xx_gmu *gmu, u32 *version)
+{
+	struct a6xx_hfi_msg_fw_version msg = { 0 };
+
+	/* Currently supporting version 1.10 */
+	msg.supported_version = (1 << 28) | (1 << 19) | (1 << 17);
+
+	return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_FW_VERSION, &msg, sizeof(msg),
+		version, sizeof(*version));
+}
+
+static int a6xx_hfi_send_perf_table_v1(struct a6xx_gmu *gmu)
+{
+	struct a6xx_hfi_msg_perf_table_v1 msg = { 0 };
+	int i;
+
+	msg.num_gpu_levels = gmu->nr_gpu_freqs;
+	msg.num_gmu_levels = gmu->nr_gmu_freqs;
+
+	for (i = 0; i < gmu->nr_gpu_freqs; i++) {
+		msg.gx_votes[i].vote = gmu->gx_arc_votes[i];
+		msg.gx_votes[i].freq = gmu->gpu_freqs[i] / 1000;
+	}
+
+	for (i = 0; i < gmu->nr_gmu_freqs; i++) {
+		msg.cx_votes[i].vote = gmu->cx_arc_votes[i];
+		msg.cx_votes[i].freq = gmu->gmu_freqs[i] / 1000;
+	}
+
+	return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_PERF_TABLE, &msg, sizeof(msg),
+		NULL, 0);
+}
+
+static int a6xx_hfi_send_perf_table(struct a6xx_gmu *gmu)
+{
+	struct a6xx_hfi_msg_perf_table msg = { 0 };
+	int i;
+
+	msg.num_gpu_levels = gmu->nr_gpu_freqs;
+	msg.num_gmu_levels = gmu->nr_gmu_freqs;
+
+	for (i = 0; i < gmu->nr_gpu_freqs; i++) {
+		msg.gx_votes[i].vote = gmu->gx_arc_votes[i];
+		msg.gx_votes[i].acd = 0xffffffff;
+		msg.gx_votes[i].freq = gmu->gpu_freqs[i] / 1000;
+	}
+
+	for (i = 0; i < gmu->nr_gmu_freqs; i++) {
+		msg.cx_votes[i].vote = gmu->cx_arc_votes[i];
+		msg.cx_votes[i].freq = gmu->gmu_freqs[i] / 1000;
+	}
+
+	return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_PERF_TABLE, &msg, sizeof(msg),
+		NULL, 0);
+}
+
+static void a618_build_bw_table(struct a6xx_hfi_msg_bw_table *msg)
+{
+	/* Send a single "off" entry since the 618 GMU doesn't do bus scaling */
+	msg->bw_level_num = 1;
+
+	msg->ddr_cmds_num = 3;
+	msg->ddr_wait_bitmask = 0x01;
+
+	msg->ddr_cmds_addrs[0] = 0x50000;
+	msg->ddr_cmds_addrs[1] = 0x5003c;
+	msg->ddr_cmds_addrs[2] = 0x5000c;
+
+	msg->ddr_cmds_data[0][0] = 0x40000000;
+	msg->ddr_cmds_data[0][1] = 0x40000000;
+	msg->ddr_cmds_data[0][2] = 0x40000000;
+
+	/*
+	 * These are the CX (CNOC) votes - these are used by the GMU but the
+	 * votes are known and fixed for the target
+	 */
+	msg->cnoc_cmds_num = 1;
+	msg->cnoc_wait_bitmask = 0x01;
+
+	msg->cnoc_cmds_addrs[0] = 0x5007c;
+	msg->cnoc_cmds_data[0][0] = 0x40000000;
+	msg->cnoc_cmds_data[1][0] = 0x60000001;
+}
+
+static void a619_build_bw_table(struct a6xx_hfi_msg_bw_table *msg)
+{
+	msg->bw_level_num = 13;
+
+	msg->ddr_cmds_num = 3;
+	msg->ddr_wait_bitmask = 0x0;
+
+	msg->ddr_cmds_addrs[0] = 0x50000;
+	msg->ddr_cmds_addrs[1] = 0x50004;
+	msg->ddr_cmds_addrs[2] = 0x50080;
+
+	msg->ddr_cmds_data[0][0] = 0x40000000;
+	msg->ddr_cmds_data[0][1] = 0x40000000;
+	msg->ddr_cmds_data[0][2] = 0x40000000;
+	msg->ddr_cmds_data[1][0] = 0x6000030c;
+	msg->ddr_cmds_data[1][1] = 0x600000db;
+	msg->ddr_cmds_data[1][2] = 0x60000008;
+	msg->ddr_cmds_data[2][0] = 0x60000618;
+	msg->ddr_cmds_data[2][1] = 0x600001b6;
+	msg->ddr_cmds_data[2][2] = 0x60000008;
+	msg->ddr_cmds_data[3][0] = 0x60000925;
+	msg->ddr_cmds_data[3][1] = 0x60000291;
+	msg->ddr_cmds_data[3][2] = 0x60000008;
+	msg->ddr_cmds_data[4][0] = 0x60000dc1;
+	msg->ddr_cmds_data[4][1] = 0x600003dc;
+	msg->ddr_cmds_data[4][2] = 0x60000008;
+	msg->ddr_cmds_data[5][0] = 0x600010ad;
+	msg->ddr_cmds_data[5][1] = 0x600004ae;
+	msg->ddr_cmds_data[5][2] = 0x60000008;
+	msg->ddr_cmds_data[6][0] = 0x600014c3;
+	msg->ddr_cmds_data[6][1] = 0x600005d4;
+	msg->ddr_cmds_data[6][2] = 0x60000008;
+	msg->ddr_cmds_data[7][0] = 0x6000176a;
+	msg->ddr_cmds_data[7][1] = 0x60000693;
+	msg->ddr_cmds_data[7][2] = 0x60000008;
+	msg->ddr_cmds_data[8][0] = 0x60001f01;
+	msg->ddr_cmds_data[8][1] = 0x600008b5;
+	msg->ddr_cmds_data[8][2] = 0x60000008;
+	msg->ddr_cmds_data[9][0] = 0x60002940;
+	msg->ddr_cmds_data[9][1] = 0x60000b95;
+	msg->ddr_cmds_data[9][2] = 0x60000008;
+	msg->ddr_cmds_data[10][0] = 0x60002f68;
+	msg->ddr_cmds_data[10][1] = 0x60000d50;
+	msg->ddr_cmds_data[10][2] = 0x60000008;
+	msg->ddr_cmds_data[11][0] = 0x60003700;
+	msg->ddr_cmds_data[11][1] = 0x60000f71;
+	msg->ddr_cmds_data[11][2] = 0x60000008;
+	msg->ddr_cmds_data[12][0] = 0x60003fce;
+	msg->ddr_cmds_data[12][1] = 0x600011ea;
+	msg->ddr_cmds_data[12][2] = 0x60000008;
+
+	msg->cnoc_cmds_num = 1;
+	msg->cnoc_wait_bitmask = 0x0;
+
+	msg->cnoc_cmds_addrs[0] = 0x50054;
+
+	msg->cnoc_cmds_data[0][0] = 0x40000000;
+}
+
+static void a640_build_bw_table(struct a6xx_hfi_msg_bw_table *msg)
+{
+	/*
+	 * Send a single "off" entry just to get things running
+	 * TODO: bus scaling
+	 */
+	msg->bw_level_num = 1;
+
+	msg->ddr_cmds_num = 3;
+	msg->ddr_wait_bitmask = 0x01;
+
+	msg->ddr_cmds_addrs[0] = 0x50000;
+	msg->ddr_cmds_addrs[1] = 0x5003c;
+	msg->ddr_cmds_addrs[2] = 0x5000c;
+
+	msg->ddr_cmds_data[0][0] = 0x40000000;
+	msg->ddr_cmds_data[0][1] = 0x40000000;
+	msg->ddr_cmds_data[0][2] = 0x40000000;
+
+	/*
+	 * These are the CX (CNOC) votes - these are used by the GMU but the
+	 * votes are known and fixed for the target
+	 */
+	msg->cnoc_cmds_num = 3;
+	msg->cnoc_wait_bitmask = 0x01;
+
+	msg->cnoc_cmds_addrs[0] = 0x50034;
+	msg->cnoc_cmds_addrs[1] = 0x5007c;
+	msg->cnoc_cmds_addrs[2] = 0x5004c;
+
+	msg->cnoc_cmds_data[0][0] = 0x40000000;
+	msg->cnoc_cmds_data[0][1] = 0x00000000;
+	msg->cnoc_cmds_data[0][2] = 0x40000000;
+
+	msg->cnoc_cmds_data[1][0] = 0x60000001;
+	msg->cnoc_cmds_data[1][1] = 0x20000001;
+	msg->cnoc_cmds_data[1][2] = 0x60000001;
+}
+
+static void a650_build_bw_table(struct a6xx_hfi_msg_bw_table *msg)
+{
+	/*
+	 * Send a single "off" entry just to get things running
+	 * TODO: bus scaling
+	 */
+	msg->bw_level_num = 1;
+
+	msg->ddr_cmds_num = 3;
+	msg->ddr_wait_bitmask = 0x01;
+
+	msg->ddr_cmds_addrs[0] = 0x50000;
+	msg->ddr_cmds_addrs[1] = 0x50004;
+	msg->ddr_cmds_addrs[2] = 0x5007c;
+
+	msg->ddr_cmds_data[0][0] = 0x40000000;
+	msg->ddr_cmds_data[0][1] = 0x40000000;
+	msg->ddr_cmds_data[0][2] = 0x40000000;
+
+	/*
+	 * These are the CX (CNOC) votes - these are used by the GMU but the
+	 * votes are known and fixed for the target
+	 */
+	msg->cnoc_cmds_num = 1;
+	msg->cnoc_wait_bitmask = 0x01;
+
+	msg->cnoc_cmds_addrs[0] = 0x500a4;
+	msg->cnoc_cmds_data[0][0] = 0x40000000;
+	msg->cnoc_cmds_data[1][0] = 0x60000001;
+}
+
+static void a660_build_bw_table(struct a6xx_hfi_msg_bw_table *msg)
+{
+	/*
+	 * Send a single "off" entry just to get things running
+	 * TODO: bus scaling
+	 */
+	msg->bw_level_num = 1;
+
+	msg->ddr_cmds_num = 3;
+	msg->ddr_wait_bitmask = 0x01;
+
+	msg->ddr_cmds_addrs[0] = 0x50004;
+	msg->ddr_cmds_addrs[1] = 0x500a0;
+	msg->ddr_cmds_addrs[2] = 0x50000;
+
+	msg->ddr_cmds_data[0][0] = 0x40000000;
+	msg->ddr_cmds_data[0][1] = 0x40000000;
+	msg->ddr_cmds_data[0][2] = 0x40000000;
+
+	/*
+	 * These are the CX (CNOC) votes - these are used by the GMU but the
+	 * votes are known and fixed for the target
+	 */
+	msg->cnoc_cmds_num = 1;
+	msg->cnoc_wait_bitmask = 0x01;
+
+	msg->cnoc_cmds_addrs[0] = 0x50070;
+	msg->cnoc_cmds_data[0][0] = 0x40000000;
+	msg->cnoc_cmds_data[1][0] = 0x60000001;
+}
+
+static void adreno_7c3_build_bw_table(struct a6xx_hfi_msg_bw_table *msg)
+{
+	/*
+	 * Send a single "off" entry just to get things running
+	 * TODO: bus scaling
+	 */
+	msg->bw_level_num = 1;
+
+	msg->ddr_cmds_num = 3;
+	msg->ddr_wait_bitmask = 0x07;
+
+	msg->ddr_cmds_addrs[0] = 0x50004;
+	msg->ddr_cmds_addrs[1] = 0x50000;
+	msg->ddr_cmds_addrs[2] = 0x50088;
+
+	msg->ddr_cmds_data[0][0] = 0x40000000;
+	msg->ddr_cmds_data[0][1] = 0x40000000;
+	msg->ddr_cmds_data[0][2] = 0x40000000;
+
+	/*
+	 * These are the CX (CNOC) votes - these are used by the GMU but the
+	 * votes are known and fixed for the target
+	 */
+	msg->cnoc_cmds_num = 1;
+	msg->cnoc_wait_bitmask = 0x01;
+
+	msg->cnoc_cmds_addrs[0] = 0x5006c;
+	msg->cnoc_cmds_data[0][0] = 0x40000000;
+	msg->cnoc_cmds_data[1][0] = 0x60000001;
+}
+
+static void a6xx_build_bw_table(struct a6xx_hfi_msg_bw_table *msg)
+{
+	/* Send a single "off" entry since the 630 GMU doesn't do bus scaling */
+	msg->bw_level_num = 1;
+
+	msg->ddr_cmds_num = 3;
+	msg->ddr_wait_bitmask = 0x07;
+
+	msg->ddr_cmds_addrs[0] = 0x50000;
+	msg->ddr_cmds_addrs[1] = 0x5005c;
+	msg->ddr_cmds_addrs[2] = 0x5000c;
+
+	msg->ddr_cmds_data[0][0] = 0x40000000;
+	msg->ddr_cmds_data[0][1] = 0x40000000;
+	msg->ddr_cmds_data[0][2] = 0x40000000;
+
+	/*
+	 * These are the CX (CNOC) votes. The GMU uses these, but the values
+	 * for the sdm845 GMU are known and fixed, so we can hard-code them.
+	 */
+	msg->cnoc_cmds_num = 3;
+	msg->cnoc_wait_bitmask = 0x05;
+
+	msg->cnoc_cmds_addrs[0] = 0x50034;
+	msg->cnoc_cmds_addrs[1] = 0x5007c;
+	msg->cnoc_cmds_addrs[2] = 0x5004c;
+
+	msg->cnoc_cmds_data[0][0] = 0x40000000;
+	msg->cnoc_cmds_data[0][1] = 0x00000000;
+	msg->cnoc_cmds_data[0][2] = 0x40000000;
+
+	msg->cnoc_cmds_data[1][0] = 0x60000001;
+	msg->cnoc_cmds_data[1][1] = 0x20000001;
+	msg->cnoc_cmds_data[1][2] = 0x60000001;
+}
+
+static int a6xx_hfi_send_bw_table(struct a6xx_gmu *gmu)
+{
+	struct a6xx_hfi_msg_bw_table msg = { 0 };
+	struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
+	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+
+	if (adreno_is_a618(adreno_gpu))
+		a618_build_bw_table(&msg);
+	else if (adreno_is_a619(adreno_gpu))
+		a619_build_bw_table(&msg);
+	else if (adreno_is_a640_family(adreno_gpu))
+		a640_build_bw_table(&msg);
+	else if (adreno_is_a650(adreno_gpu))
+		a650_build_bw_table(&msg);
+	else if (adreno_is_7c3(adreno_gpu))
+		adreno_7c3_build_bw_table(&msg);
+	else if (adreno_is_a660(adreno_gpu))
+		a660_build_bw_table(&msg);
+	else
+		a6xx_build_bw_table(&msg);
+
+	return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_BW_TABLE, &msg, sizeof(msg),
+		NULL, 0);
+}
+
+static int a6xx_hfi_send_test(struct a6xx_gmu *gmu)
+{
+	struct a6xx_hfi_msg_test msg = { 0 };
+
+	return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_TEST, &msg, sizeof(msg),
+		NULL, 0);
+}
+
+static int a6xx_hfi_send_start(struct a6xx_gmu *gmu)
+{
+	struct a6xx_hfi_msg_start msg = { 0 };
+
+	return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_START, &msg, sizeof(msg),
+		NULL, 0);
+}
+
+static int a6xx_hfi_send_core_fw_start(struct a6xx_gmu *gmu)
+{
+	struct a6xx_hfi_msg_core_fw_start msg = { 0 };
+
+	return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_CORE_FW_START, &msg,
+		sizeof(msg), NULL, 0);
+}
+
+int a6xx_hfi_set_freq(struct a6xx_gmu *gmu, int index)
+{
+	struct a6xx_hfi_gx_bw_perf_vote_cmd msg = { 0 };
+
+	msg.ack_type = 1; /* blocking */
+	msg.freq = index;
+	msg.bw = 0; /* TODO: bus scaling */
+
+	return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_GX_BW_PERF_VOTE, &msg,
+		sizeof(msg), NULL, 0);
+}
+
+int a6xx_hfi_send_prep_slumber(struct a6xx_gmu *gmu)
+{
+	struct a6xx_hfi_prep_slumber_cmd msg = { 0 };
+
+	/* TODO: should freq and bw fields be non-zero ? */
+
+	return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_PREPARE_SLUMBER, &msg,
+		sizeof(msg), NULL, 0);
+}
+
+static int a6xx_hfi_start_v1(struct a6xx_gmu *gmu, int boot_state)
+{
+	int ret;
+
+	ret = a6xx_hfi_send_gmu_init(gmu, boot_state);
+	if (ret)
+		return ret;
+
+	ret = a6xx_hfi_get_fw_version(gmu, NULL);
+	if (ret)
+		return ret;
+
+	/*
+	 * We have to exchange version numbers per the sequence, but at this
+	 * point the kernel driver doesn't need to know the exact version of
+	 * the GMU firmware
+	 */
+
+	ret = a6xx_hfi_send_perf_table_v1(gmu);
+	if (ret)
+		return ret;
+
+	ret = a6xx_hfi_send_bw_table(gmu);
+	if (ret)
+		return ret;
+
+	/*
+	 * Let the GMU know that there won't be any more HFI messages until the
+	 * next boot
+	 */
+	a6xx_hfi_send_test(gmu);
+
+	return 0;
+}
+
+int a6xx_hfi_start(struct a6xx_gmu *gmu, int boot_state)
+{
+	int ret;
+
+	if (gmu->legacy)
+		return a6xx_hfi_start_v1(gmu, boot_state);
+
+	ret = a6xx_hfi_send_perf_table(gmu);
+	if (ret)
+		return ret;
+
+	ret = a6xx_hfi_send_bw_table(gmu);
+	if (ret)
+		return ret;
+
+	ret = a6xx_hfi_send_core_fw_start(gmu);
+	if (ret)
+		return ret;
+
+	/*
+	 * The downstream driver sends this in its "a6xx_hw_init" equivalent,
+	 * but there seems to be no harm in sending it here
+	 */
+	ret = a6xx_hfi_send_start(gmu);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+void a6xx_hfi_stop(struct a6xx_gmu *gmu)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(gmu->queues); i++) {
+		struct a6xx_hfi_queue *queue = &gmu->queues[i];
+
+		if (!queue->header)
+			continue;
+
+		if (queue->header->read_index != queue->header->write_index)
+			DRM_DEV_ERROR(gmu->dev, "HFI queue %d is not empty\n", i);
+
+		queue->header->read_index = 0;
+		queue->header->write_index = 0;
+
+		memset(&queue->history, 0xff, sizeof(queue->history));
+		queue->history_idx = 0;
+	}
+}
+
+static void a6xx_hfi_queue_init(struct a6xx_hfi_queue *queue,
+	struct a6xx_hfi_queue_header *header, void *virt, u64 iova,
+	u32 id)
+{
+	spin_lock_init(&queue->lock);
+	queue->header = header;
+	queue->data = virt;
+	atomic_set(&queue->seqnum, 0);
+
+	memset(&queue->history, 0xff, sizeof(queue->history));
+	queue->history_idx = 0;
+
+	/* Set up the shared memory header */
+	header->iova = iova;
+	header->type = 10 << 8 | id;
+	header->status = 1;
+	header->size = SZ_4K >> 2;
+	header->msg_size = 0;
+	header->dropped = 0;
+	header->rx_watermark = 1;
+	header->tx_watermark = 1;
+	header->rx_request = 1;
+	header->tx_request = 0;
+	header->read_index = 0;
+	header->write_index = 0;
+}
+
+void a6xx_hfi_init(struct a6xx_gmu *gmu)
+{
+	struct a6xx_gmu_bo *hfi = &gmu->hfi;
+	struct a6xx_hfi_queue_table_header *table = hfi->virt;
+	struct a6xx_hfi_queue_header *headers = hfi->virt + sizeof(*table);
+	u64 offset;
+	int table_size;
+
+	/*
+	 * The table size is the size of the table header plus all of the queue
+	 * headers
+	 */
+	table_size = sizeof(*table);
+	table_size += (ARRAY_SIZE(gmu->queues) *
+		sizeof(struct a6xx_hfi_queue_header));
+
+	table->version = 0;
+	table->size = table_size;
+	/* First queue header is located immediately after the table header */
+	table->qhdr0_offset = sizeof(*table) >> 2;
+	table->qhdr_size = sizeof(struct a6xx_hfi_queue_header) >> 2;
+	table->num_queues = ARRAY_SIZE(gmu->queues);
+	table->active_queues = ARRAY_SIZE(gmu->queues);
+
+	/* Command queue */
+	offset = SZ_4K;
+	a6xx_hfi_queue_init(&gmu->queues[0], &headers[0], hfi->virt + offset,
+		hfi->iova + offset, 0);
+
+	/* GMU response queue */
+	offset += SZ_4K;
+	a6xx_hfi_queue_init(&gmu->queues[1], &headers[1], hfi->virt + offset,
+		hfi->iova + offset, gmu->legacy ? 4 : 1);
+}
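
For reference, here is a minimal standalone sketch of the HFI header layout implied by the packing expression in a6xx_hfi_send_msg() above. The field widths are inferred from that expression and from the HFI_HEADER_ID/SIZE/SEQNUM accessors used in the file; the authoritative definitions live in a6xx_gmu.h, and EX_HFI_MSG_CMD is a placeholder value, not the real constant.

/*
 * Sketch only: decode the header fields that a6xx_hfi_send_msg() packs as
 *     (seqnum << 20) | (HFI_MSG_CMD << 16) | (dwords << 8) | id
 * Field widths are assumptions inferred from that expression.
 */
#include <stdint.h>
#include <stdio.h>

#define EX_HFI_MSG_CMD 0u /* placeholder, for illustration only */

static uint32_t ex_hdr_id(uint32_t hdr)     { return hdr & 0xff; }          /* bits [7:0]   */
static uint32_t ex_hdr_size(uint32_t hdr)   { return (hdr >> 8) & 0xff; }   /* bits [15:8], dwords */
static uint32_t ex_hdr_type(uint32_t hdr)   { return (hdr >> 16) & 0xf; }   /* bits [19:16] */
static uint32_t ex_hdr_seqnum(uint32_t hdr) { return (hdr >> 20) & 0xfff; } /* bits [31:20] */

int main(void)
{
	/* Pack a header the same way a6xx_hfi_send_msg() does */
	uint32_t seqnum = 42, dwords = 3, id = 4;
	uint32_t hdr = (seqnum << 20) | (EX_HFI_MSG_CMD << 16) | (dwords << 8) | id;

	printf("id=%u size=%u type=%u seqnum=%u\n",
	       ex_hdr_id(hdr), ex_hdr_size(hdr), ex_hdr_type(hdr), ex_hdr_seqnum(hdr));
	return 0;
}

The 12-bit seqnum field matches the driver wrapping its counter with "% 0xfff" before packing.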
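A rough sketch of the lifecycle these entry points expect follows. The real call sites live in a6xx_gmu.c and may order things differently; example_gmu_boot() is hypothetical and shown only to illustrate how init, start and the frequency vote fit together.

/* Illustrative only - not the driver's actual boot path */
static int example_gmu_boot(struct a6xx_gmu *gmu, int boot_state)
{
	int ret;

	/* Set up the queue table and queue headers in the shared HFI buffer */
	a6xx_hfi_init(gmu);

	/* Send the perf/bw tables (plus CORE_FW_START and START on non-legacy) */
	ret = a6xx_hfi_start(gmu, boot_state);
	if (ret)
		return ret;

	/* Vote for an initial GPU frequency level */
	return a6xx_hfi_set_freq(gmu, 0);
}

On the way down, the shutdown side would send a6xx_hfi_send_prep_slumber() and then call a6xx_hfi_stop() to reset the queue indices for the next boot.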