From 5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Tue, 21 Feb 2023 18:24:12 -0800
Subject: Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next

Pull networking updates from Jakub Kicinski:
 "Core:

   - Add a dedicated kmem_cache for typical/small skb->head, avoiding
     the need to access struct page at kfree time and improving memory
     use.

   - Introduce a sysctl to set the default RPS configuration for new
     netdevs.

   - Define a Netlink protocol specification format which can be used
     to describe messages used by each family and auto-generate
     parsers. Add tools for generating kernel data structures and uAPI
     headers.

   - Expose all net/core sysctls inside netns.

   - Remove the 4s sleep in netpoll if carrier is instantly detected
     on boot.

   - Add a configurable limit of MDB entries per port and per
     port-vlan.

   - Continue populating drop reasons throughout the stack.

   - Retire a handful of legacy Qdiscs and classifiers.

  Protocols:

   - Support IPv4 big TCP (TSO frames larger than 64kB).

   - Add the IP_LOCAL_PORT_RANGE socket option, to control the local
     port range on a socket-by-socket basis (a usage sketch follows
     the shortlog below).

   - Track and report in procfs the number of MPTCP sockets used.

   - Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
     manager.

   - IPv6: don't check net.ipv6.route.max_size; rely on garbage
     collection to free memory, as IPv4 already does.

   - Support the Penultimate Segment Pop (PSP) flavor in SRv6
     (RFC 8986).

   - ICMP: add per-rate-limit counters.

   - Add support for user scanning requests in ieee802154.

   - Remove static WEP support.

   - Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
     reporting.

   - Wi-Fi 7 EHT channel puncturing support (client & AP).

  BPF:

   - Add an rbtree data structure, following the "next-gen data
     structure" precedent set by the recently added linked list, that
     is, by using kfuncs + kptrs instead of adding a new BPF map type.

   - Expose XDP hints via kfuncs, with initial support for RX hash and
     timestamp metadata.

   - Add the BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key
     to better support decap on GRE tunnel devices not operating in
     collect-metadata mode.

   - Improve the x86 JIT's codegen for PROBE_MEM runtime error checks.

   - Remove the need for trace_printk_lock in the bpf_trace_printk
     and bpf_trace_vprintk helpers.

   - Extend libbpf's bpf_tracing.h support for tracing arguments of
     kprobes/uprobes and syscalls as a special case.

   - Significantly reduce the search time for module symbols by
     livepatch and BPF.

   - Enable cpumasks to be used as kptrs, which is useful for tracing
     programs that track which tasks end up running on which CPUs
     over different time intervals.

   - Add support for BPF trampolines on s390x and riscv64.

   - Add the capability to export the XDP features supported by the
     NIC.

   - Add the __bpf_kfunc tag for marking kernel functions as kfuncs.

   - Add the cgroup.memory=nobpf kernel parameter to disable BPF
     memory accounting in container environments.

  Netfilter:

   - Remove the CLUSTERIP target. It has been marked as obsolete for
     years, and we still have WARN splats caused by races in the
     out-of-band /proc interface installed by this target.

   - Add 'destroy' commands to nf_tables. They are identical to the
     existing 'delete' commands, but do not return an error if the
     referenced object (set, chain, rule...) does not exist.

  Driver API:

   - Improve cpumask_local_spread() locality to help NICs set the
     right IRQ affinity on AMD platforms.

   - Separate C22 and C45 MDIO bus transactions more clearly.

   - Introduce a new DCB table to control DSCP rewrite on egress.
   - Support configuration of the Physical Layer Collision Avoidance
     (PLCA) Reconciliation Sublayer (RS) (802.3cg-2019), a modern
     version of shared-medium Ethernet.

   - Support the MAC Merge layer (IEEE 802.3-2018 clause 99), allowing
     preemption of low-priority frames by high-priority frames.

   - Add support for controlling MACsec offload using netlink SET.

   - Rework devlink instance refcounts to allow registration and
     de-registration under the instance lock. Split the code into
     multiple files, drop some of the unnecessarily granular locks and
     factor out common parts of netlink operation handling.

   - Add TX frame aggregation parameters (for USB drivers).

   - Add a new attribute, TCA_EXT_WARN_MSG, to report TC (offload)
     warning messages with notifications for debug.

   - Allow offloading of UDP NEW connections via act_ct.

   - Add support for per-action HW stats in TC.

   - Support hardware miss to TC action (continue processing in SW
     from a specific point in the action chain).

   - Warn if the old Wireless Extensions user space interface is used
     with modern cfg80211/mac80211 drivers. Do not support Wireless
     Extensions for Wi-Fi 7 devices at all. Everyone should switch to
     using the nl80211 interface instead.

   - Improve the CAN bit timing configuration. Use extack to return
     error messages directly to user space, and update the SJW
     handling, including the definition of a new default value that
     will benefit CAN-FD controllers by increasing their oscillator
     tolerance.

  New hardware / drivers:

   - Ethernet:
      - nVidia BlueField-3 support (control traffic driver)
      - Ethernet support for imx93 SoCs
      - Motorcomm yt8531 gigabit Ethernet PHY
      - onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
      - Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
      - Amlogic gxl MDIO mux

   - WiFi:
      - RealTek RTL8188EU (rtl8xxxu)
      - Qualcomm Wi-Fi 7 devices (ath12k)

   - CAN:
      - Renesas R-Car V4H

  Drivers:

   - Bluetooth:
      - Set Per Platform Antenna Gain (PPAG) for Intel controllers.
   - Ethernet NICs:
      - Intel (1G, igc):
         - support TSN / Qbv / packet scheduling features of the i226
           model
      - Intel (100G, ice):
         - use the GNSS subsystem instead of TTY
         - multi-buffer XDP support
         - extend support for GPIO pins to E823 devices
      - nVidia/Mellanox:
         - update the shared buffer configuration on PFC commands
         - implement the PTP adjphase function for HW offset control
         - TC support for Geneve and GRE with VF tunnel offload
         - more efficient crypto key management method
         - multi-port eswitch support
      - Netronome/Corigine:
         - add DCB IEEE support
         - support IPsec offloading for NFP3800
      - Freescale/NXP (enetc):
         - support XDP_REDIRECT for XDP non-linear buffers
         - improve reconfig, avoid link flap and waiting for idle
         - support the MAC Merge layer
      - Other NICs:
         - sfc/ef100: add basic devlink support for ef100
         - ionic: rx_push mode operation (writing descriptors via MMIO)
         - bnxt: use the auxiliary bus abstraction for RDMA
         - r8169: disable ASPM and reset the bus in case of tx timeout
         - cpsw: support QSGMII mode for J721e CPSW9G
         - cpts: support pulse-per-second output
         - ngbe: add an mdio bus driver
         - usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
         - r8152: handle devices with FW with NCM support
         - amd-xgbe: support 10Mbps and 2.5GbE speeds, and
           rx-adaptation
         - virtio-net: support multi-buffer XDP
         - virtio/vsock: replace virtio_vsock_pkt with sk_buff
         - tsnep: XDP support

   - Ethernet high-speed switches:
      - nVidia/Mellanox (mlxsw):
         - add support for latency TLV (in FW control messages)
      - Microchip (sparx5):
         - separate explicit and implicit traffic forwarding rules,
           make the implicit rules always active
         - add support for egress DSCP rewrite
         - IS0 VCAP support (Ingress Classification)
         - IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS,
           etc.)
         - ES2 VCAP support (Egress Access Control)
         - support for Per-Stream Filtering and Policing (802.1Q,
           8.6.5.1)

   - Ethernet embedded switches:
      - Marvell (mv88e6xxx):
         - add MAB (port auth) offload support
         - enable PTP receive for mv88e6390
      - NXP (ocelot):
         - support the MAC Merge layer
         - support for the vsc7512 internal copper PHYs
      - Microchip:
         - lan9303: convert to PHYLINK
         - lan966x: support TC flower filter statistics
         - lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
         - lan937x: support Credit Based Shaper configuration
         - ksz9477: support Energy Efficient Ethernet
      - other:
         - qca8k: convert to the regmap read/write API, use bulk
           operations
         - rswitch: improve TX timestamp accuracy

   - Intel WiFi (iwlwifi):
      - EHT (Wi-Fi 7) rate reporting
      - STEP equalizer support: transfer some STEP (connection to
        radio on platforms with integrated WiFi) related parameters
        from the BIOS to the firmware.
   - Qualcomm 802.11ax WiFi (ath11k):
      - IPQ5018 support
      - Fine Timing Measurement (FTM) responder role support
      - channel 177 support

   - MediaTek WiFi (mt76):
      - per-PHY LED support
      - mt7996: EHT (Wi-Fi 7) support
      - Wireless Ethernet Dispatch (WED) reset support
      - switch to using the page pool allocator

   - RealTek WiFi (rtw89):
      - support a new version of Bluetooth co-existence

   - Mobile:
      - rmnet: support TX aggregation"

* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
  page_pool: add a comment explaining the fragment counter usage
  net: ethtool: fix __ethtool_dev_mm_supported() implementation
  ethtool: pse-pd: Fix double word in comments
  xsk: add linux/vmalloc.h to xsk.c
  sefltests: netdevsim: wait for devlink instance after netns removal
  selftest: fib_tests: Always cleanup before exit
  net/mlx5e: Align IPsec ASO result memory to be as required by hardware
  net/mlx5e: TC, Set CT miss to the specific ct action instance
  net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
  net/mlx5: Refactor tc miss handling to a single function
  net/mlx5: Kconfig: Make tc offload depend on tc skb extension
  net/sched: flower: Support hardware miss to tc action
  net/sched: flower: Move filter handle initialization earlier
  net/sched: cls_api: Support hardware miss to tc action
  net/sched: Rename user cookie and act cookie
  sfc: fix builds without CONFIG_RTC_LIB
  sfc: clean up some inconsistent indentings
  net/mlx4_en: Introduce flexible array to silence overflow warning
  net: lan966x: Fix possible deadlock inside PTP
  net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
  ...
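As a usage illustration for the new IP_LOCAL_PORT_RANGE socket option
listed above: a minimal user-space sketch, not part of the pull request
itself. It assumes the uAPI added in this cycle, where the option value
is a u32 whose low 16 bits hold the lower bound and whose high 16 bits
hold the upper bound of the port range; older uapi headers may not
define the constant yet.

    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #ifndef IP_LOCAL_PORT_RANGE
    #define IP_LOCAL_PORT_RANGE 51  /* uapi value in this release */
    #endif

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            /* Low 16 bits: lower bound; high 16 bits: upper bound. */
            uint32_t range = 40000 | (40100u << 16);

            if (fd < 0 ||
                setsockopt(fd, SOL_IP, IP_LOCAL_PORT_RANGE,
                           &range, sizeof(range)) < 0) {
                    perror("IP_LOCAL_PORT_RANGE");
                    return 1;
            }
            /* connect() without an explicit bind() now picks a source
             * port in 40000..40100 instead of the netns-wide range.
             */
            return 0;
    }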
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 433 +++++++++++++++++++++++++++++++
 1 file changed, 433 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
new file mode 100644
index 000000000..3989e755a
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -0,0 +1,433 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Christian König <christian.koenig@amd.com>
+ */
+#ifndef __AMDGPU_RING_H__
+#define __AMDGPU_RING_H__
+
+#include <drm/amdgpu_drm.h>
+#include <drm/gpu_scheduler.h>
+#include <drm/drm_print.h>
+
+struct amdgpu_device;
+struct amdgpu_ring;
+struct amdgpu_ib;
+struct amdgpu_cs_parser;
+struct amdgpu_job;
+struct amdgpu_vm;
+
+/* max number of rings */
+#define AMDGPU_MAX_RINGS 28
+#define AMDGPU_MAX_HWIP_RINGS 8
+#define AMDGPU_MAX_GFX_RINGS 2
+#define AMDGPU_MAX_SW_GFX_RINGS 2
+#define AMDGPU_MAX_COMPUTE_RINGS 8
+#define AMDGPU_MAX_VCE_RINGS 3
+#define AMDGPU_MAX_UVD_ENC_RINGS 2
+
+enum amdgpu_ring_priority_level {
+        AMDGPU_RING_PRIO_0,
+        AMDGPU_RING_PRIO_1,
+        AMDGPU_RING_PRIO_DEFAULT = 1,
+        AMDGPU_RING_PRIO_2,
+        AMDGPU_RING_PRIO_MAX
+};
+
+/* some special values for the owner field */
+#define AMDGPU_FENCE_OWNER_UNDEFINED ((void *)0ul)
+#define AMDGPU_FENCE_OWNER_VM ((void *)1ul)
+#define AMDGPU_FENCE_OWNER_KFD ((void *)2ul)
+
+#define AMDGPU_FENCE_FLAG_64BIT (1 << 0)
+#define AMDGPU_FENCE_FLAG_INT (1 << 1)
+#define AMDGPU_FENCE_FLAG_TC_WB_ONLY (1 << 2)
+#define AMDGPU_FENCE_FLAG_EXEC (1 << 3)
+
+#define to_amdgpu_ring(s) container_of((s), struct amdgpu_ring, sched)
+
+#define AMDGPU_IB_POOL_SIZE (1024 * 1024)
+
+enum amdgpu_ring_type {
+        AMDGPU_RING_TYPE_GFX = AMDGPU_HW_IP_GFX,
+        AMDGPU_RING_TYPE_COMPUTE = AMDGPU_HW_IP_COMPUTE,
+        AMDGPU_RING_TYPE_SDMA = AMDGPU_HW_IP_DMA,
+        AMDGPU_RING_TYPE_UVD = AMDGPU_HW_IP_UVD,
+        AMDGPU_RING_TYPE_VCE = AMDGPU_HW_IP_VCE,
+        AMDGPU_RING_TYPE_UVD_ENC = AMDGPU_HW_IP_UVD_ENC,
+        AMDGPU_RING_TYPE_VCN_DEC = AMDGPU_HW_IP_VCN_DEC,
+        AMDGPU_RING_TYPE_VCN_ENC = AMDGPU_HW_IP_VCN_ENC,
+        AMDGPU_RING_TYPE_VCN_JPEG = AMDGPU_HW_IP_VCN_JPEG,
+        AMDGPU_RING_TYPE_KIQ,
+        AMDGPU_RING_TYPE_MES
+};
+
+enum amdgpu_ib_pool_type {
+        /* Normal submissions to the top of the pipeline. */
+        AMDGPU_IB_POOL_DELAYED,
+        /* Immediate submissions to the bottom of the pipeline. */
+        AMDGPU_IB_POOL_IMMEDIATE,
+        /* Direct submission to the ring buffer during init and reset. */
+        AMDGPU_IB_POOL_DIRECT,
+
+        AMDGPU_IB_POOL_MAX
+};
+
+struct amdgpu_ib {
+        struct amdgpu_sa_bo *sa_bo;
+        uint32_t length_dw;
+        uint64_t gpu_addr;
+        uint32_t *ptr;
+        uint32_t flags;
+};
+
+struct amdgpu_sched {
+        u32 num_scheds;
+        struct drm_gpu_scheduler *sched[AMDGPU_MAX_HWIP_RINGS];
+};
+
+/*
+ * Fences.
+ */
+struct amdgpu_fence_driver {
+        uint64_t gpu_addr;
+        volatile uint32_t *cpu_addr;
+        /* sync_seq is protected by ring emission lock */
+        uint32_t sync_seq;
+        atomic_t last_seq;
+        bool initialized;
+        struct amdgpu_irq_src *irq_src;
+        unsigned irq_type;
+        struct timer_list fallback_timer;
+        unsigned num_fences_mask;
+        spinlock_t lock;
+        struct dma_fence **fences;
+};
+
+extern const struct drm_sched_backend_ops amdgpu_sched_ops;
+
+void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
+void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring);
+
+int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring);
+int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
+                                   struct amdgpu_irq_src *irq_src,
+                                   unsigned irq_type);
+void amdgpu_fence_driver_hw_init(struct amdgpu_device *adev);
+void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev);
+int amdgpu_fence_driver_sw_init(struct amdgpu_device *adev);
+void amdgpu_fence_driver_sw_fini(struct amdgpu_device *adev);
+int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **fence,
+                      struct amdgpu_job *job, unsigned flags);
+int amdgpu_fence_emit_polling(struct amdgpu_ring *ring, uint32_t *s,
+                              uint32_t timeout);
+bool amdgpu_fence_process(struct amdgpu_ring *ring);
+int amdgpu_fence_wait_empty(struct amdgpu_ring *ring);
+signed long amdgpu_fence_wait_polling(struct amdgpu_ring *ring,
+                                      uint32_t wait_seq,
+                                      signed long timeout);
+unsigned amdgpu_fence_count_emitted(struct amdgpu_ring *ring);
+
+void amdgpu_fence_driver_isr_toggle(struct amdgpu_device *adev, bool stop);
+
+u64 amdgpu_fence_last_unsignaled_time_us(struct amdgpu_ring *ring);
+void amdgpu_fence_update_start_timestamp(struct amdgpu_ring *ring, uint32_t seq,
+                                         ktime_t timestamp);
+
+/*
+ * Rings.
+ */
+
+/* provided by hw blocks that expose a ring buffer for commands */
+struct amdgpu_ring_funcs {
+        enum amdgpu_ring_type type;
+        uint32_t align_mask;
+        u32 nop;
+        bool support_64bit_ptrs;
+        bool no_user_fence;
+        bool secure_submission_supported;
+        unsigned vmhub;
+        unsigned extra_dw;
+
+        /* ring read/write ptr handling */
+        u64 (*get_rptr)(struct amdgpu_ring *ring);
+        u64 (*get_wptr)(struct amdgpu_ring *ring);
+        void (*set_wptr)(struct amdgpu_ring *ring);
+        /* validating and patching of IBs */
+        int (*parse_cs)(struct amdgpu_cs_parser *p,
+                        struct amdgpu_job *job,
+                        struct amdgpu_ib *ib);
+        int (*patch_cs_in_place)(struct amdgpu_cs_parser *p,
+                                 struct amdgpu_job *job,
+                                 struct amdgpu_ib *ib);
+        /* constants to calculate how many DW are needed for an emit */
+        unsigned emit_frame_size;
+        unsigned emit_ib_size;
+        /* command emit functions */
+        void (*emit_ib)(struct amdgpu_ring *ring,
+                        struct amdgpu_job *job,
+                        struct amdgpu_ib *ib,
+                        uint32_t flags);
+        void (*emit_fence)(struct amdgpu_ring *ring, uint64_t addr,
+                           uint64_t seq, unsigned flags);
+        void (*emit_pipeline_sync)(struct amdgpu_ring *ring);
+        void (*emit_vm_flush)(struct amdgpu_ring *ring, unsigned vmid,
+                              uint64_t pd_addr);
+        void (*emit_hdp_flush)(struct amdgpu_ring *ring);
+        void (*emit_gds_switch)(struct amdgpu_ring *ring, uint32_t vmid,
+                                uint32_t gds_base, uint32_t gds_size,
+                                uint32_t gws_base, uint32_t gws_size,
+                                uint32_t oa_base, uint32_t oa_size);
+        /* testing functions */
+        int (*test_ring)(struct amdgpu_ring *ring);
+        int (*test_ib)(struct amdgpu_ring *ring, long timeout);
+        /* insert NOP packets */
+        void (*insert_nop)(struct amdgpu_ring *ring, uint32_t count);
+        void (*insert_start)(struct amdgpu_ring *ring);
+        void (*insert_end)(struct amdgpu_ring *ring);
+        /* pad the indirect buffer to the necessary number of dw */
+        void (*pad_ib)(struct amdgpu_ring *ring, struct amdgpu_ib *ib);
+        unsigned (*init_cond_exec)(struct amdgpu_ring *ring);
+        void (*patch_cond_exec)(struct amdgpu_ring *ring, unsigned offset);
+        /* note usage for clock and power gating */
+        void (*begin_use)(struct amdgpu_ring *ring);
+        void (*end_use)(struct amdgpu_ring *ring);
+        void (*emit_switch_buffer)(struct amdgpu_ring *ring);
+        void (*emit_cntxcntl)(struct amdgpu_ring *ring, uint32_t flags);
+        void (*emit_rreg)(struct amdgpu_ring *ring, uint32_t reg,
+                          uint32_t reg_val_offs);
+        void (*emit_wreg)(struct amdgpu_ring *ring, uint32_t reg, uint32_t val);
+        void (*emit_reg_wait)(struct amdgpu_ring *ring, uint32_t reg,
+                              uint32_t val, uint32_t mask);
+        void (*emit_reg_write_reg_wait)(struct amdgpu_ring *ring,
+                                        uint32_t reg0, uint32_t reg1,
+                                        uint32_t ref, uint32_t mask);
+        void (*emit_frame_cntl)(struct amdgpu_ring *ring, bool start,
+                                bool secure);
+        /* Try to soft recover the ring to make the fence signal */
+        void (*soft_recovery)(struct amdgpu_ring *ring, unsigned vmid);
+        int (*preempt_ib)(struct amdgpu_ring *ring);
+        void (*emit_mem_sync)(struct amdgpu_ring *ring);
+        void (*emit_wave_limit)(struct amdgpu_ring *ring, bool enable);
+};
+
+struct amdgpu_ring {
+        struct amdgpu_device *adev;
+        const struct amdgpu_ring_funcs *funcs;
+        struct amdgpu_fence_driver fence_drv;
+        struct drm_gpu_scheduler sched;
+
+        struct amdgpu_bo *ring_obj;
+        volatile uint32_t *ring;
+        unsigned rptr_offs;
+        u64 rptr_gpu_addr;
+        volatile u32 *rptr_cpu_addr;
+        u64 wptr;
+        u64 wptr_old;
+        unsigned ring_size;
+        unsigned max_dw;
+        int count_dw;
+        uint64_t gpu_addr;
+        uint64_t ptr_mask;
+        uint32_t buf_mask;
+        u32 idx;
+        u32 me;
+        u32 pipe;
+        u32 queue;
+        struct amdgpu_bo *mqd_obj;
+        uint64_t mqd_gpu_addr;
+        void *mqd_ptr;
+        uint64_t eop_gpu_addr;
+        u32 doorbell_index;
+        bool use_doorbell;
+        bool use_pollmem;
+        unsigned wptr_offs;
+        u64 wptr_gpu_addr;
+        volatile u32 *wptr_cpu_addr;
+        unsigned fence_offs;
+        u64 fence_gpu_addr;
+        volatile u32 *fence_cpu_addr;
+        uint64_t current_ctx;
+        char name[16];
+        u32 trail_seq;
+        unsigned trail_fence_offs;
+        u64 trail_fence_gpu_addr;
+        volatile u32 *trail_fence_cpu_addr;
+        unsigned cond_exe_offs;
+        u64 cond_exe_gpu_addr;
+        volatile u32 *cond_exe_cpu_addr;
+        unsigned vm_inv_eng;
+        struct dma_fence *vmid_wait;
+        bool has_compute_vm_bug;
+        bool no_scheduler;
+        int hw_prio;
+        unsigned num_hw_submission;
+        atomic_t *sched_score;
+
+        /* used for mes */
+        bool is_mes_queue;
+        uint32_t hw_queue_id;
+        struct amdgpu_mes_ctx_data *mes_ctx;
+
+        bool is_sw_ring;
+        unsigned int entry_index;
+};
+
+#define amdgpu_ring_parse_cs(r, p, job, ib) ((r)->funcs->parse_cs((p), (job), (ib)))
+#define amdgpu_ring_patch_cs_in_place(r, p, job, ib) ((r)->funcs->patch_cs_in_place((p), (job), (ib)))
+#define amdgpu_ring_test_ring(r) (r)->funcs->test_ring((r))
+#define amdgpu_ring_test_ib(r, t) ((r)->funcs->test_ib ? (r)->funcs->test_ib((r), (t)) : 0)
+#define amdgpu_ring_get_rptr(r) (r)->funcs->get_rptr((r))
+#define amdgpu_ring_get_wptr(r) (r)->funcs->get_wptr((r))
+#define amdgpu_ring_set_wptr(r) (r)->funcs->set_wptr((r))
+#define amdgpu_ring_emit_ib(r, job, ib, flags) ((r)->funcs->emit_ib((r), (job), (ib), (flags)))
+#define amdgpu_ring_emit_pipeline_sync(r) (r)->funcs->emit_pipeline_sync((r))
+#define amdgpu_ring_emit_vm_flush(r, vmid, addr) (r)->funcs->emit_vm_flush((r), (vmid), (addr))
+#define amdgpu_ring_emit_fence(r, addr, seq, flags) (r)->funcs->emit_fence((r), (addr), (seq), (flags))
+#define amdgpu_ring_emit_gds_switch(r, v, db, ds, wb, ws, ab, as) (r)->funcs->emit_gds_switch((r), (v), (db), (ds), (wb), (ws), (ab), (as))
+#define amdgpu_ring_emit_hdp_flush(r) (r)->funcs->emit_hdp_flush((r))
+#define amdgpu_ring_emit_switch_buffer(r) (r)->funcs->emit_switch_buffer((r))
+#define amdgpu_ring_emit_cntxcntl(r, d) (r)->funcs->emit_cntxcntl((r), (d))
+#define amdgpu_ring_emit_rreg(r, d, o) (r)->funcs->emit_rreg((r), (d), (o))
+#define amdgpu_ring_emit_wreg(r, d, v) (r)->funcs->emit_wreg((r), (d), (v))
+#define amdgpu_ring_emit_reg_wait(r, d, v, m) (r)->funcs->emit_reg_wait((r), (d), (v), (m))
+#define amdgpu_ring_emit_reg_write_reg_wait(r, d0, d1, v, m) (r)->funcs->emit_reg_write_reg_wait((r), (d0), (d1), (v), (m))
+#define amdgpu_ring_emit_frame_cntl(r, b, s) (r)->funcs->emit_frame_cntl((r), (b), (s))
+#define amdgpu_ring_pad_ib(r, ib) ((r)->funcs->pad_ib((r), (ib)))
+#define amdgpu_ring_init_cond_exec(r) (r)->funcs->init_cond_exec((r))
+#define amdgpu_ring_patch_cond_exec(r, o) (r)->funcs->patch_cond_exec((r), (o))
+#define amdgpu_ring_preempt_ib(r) (r)->funcs->preempt_ib(r)
+
+int amdgpu_ring_alloc(struct amdgpu_ring *ring, unsigned ndw);
+void amdgpu_ring_ib_begin(struct amdgpu_ring *ring);
+void amdgpu_ring_ib_end(struct amdgpu_ring *ring);
+
+void amdgpu_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count);
+void amdgpu_ring_generic_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib);
+void amdgpu_ring_commit(struct amdgpu_ring *ring);
+void amdgpu_ring_undo(struct amdgpu_ring *ring);
+int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
+                     unsigned int max_dw, struct amdgpu_irq_src *irq_src,
+                     unsigned int irq_type, unsigned int hw_prio,
+                     atomic_t *sched_score);
+void amdgpu_ring_fini(struct amdgpu_ring *ring);
+void amdgpu_ring_emit_reg_write_reg_wait_helper(struct amdgpu_ring *ring,
+                                                uint32_t reg0, uint32_t val0,
+                                                uint32_t reg1, uint32_t val1);
+bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
+                               struct dma_fence *fence);
+
+static inline void amdgpu_ring_set_preempt_cond_exec(struct amdgpu_ring *ring,
+                                                     bool cond_exec)
+{
+        *ring->cond_exe_cpu_addr = cond_exec;
+}
+
+static inline void amdgpu_ring_clear_ring(struct amdgpu_ring *ring)
+{
+        int i = 0;
+
+        while (i <= ring->buf_mask)
+                ring->ring[i++] = ring->funcs->nop;
+}
+
+static inline void amdgpu_ring_write(struct amdgpu_ring *ring, uint32_t v)
+{
+        if (ring->count_dw <= 0)
+                DRM_ERROR("amdgpu: writing more dwords to the ring than expected!\n");
+        ring->ring[ring->wptr++ & ring->buf_mask] = v;
+        ring->wptr &= ring->ptr_mask;
+        ring->count_dw--;
+}
+
+static inline void amdgpu_ring_write_multiple(struct amdgpu_ring *ring,
+                                              void *src, int count_dw)
+{
+        unsigned occupied, chunk1, chunk2;
+        void *dst;
+
+        if (unlikely(ring->count_dw < count_dw))
+                DRM_ERROR("amdgpu: writing more dwords to the ring than expected!\n");
+
+        occupied = ring->wptr & ring->buf_mask;
+        dst = (void *)&ring->ring[occupied];
+        chunk1 = ring->buf_mask + 1 - occupied;
+        chunk1 = (chunk1 >= count_dw) ? count_dw : chunk1;
+        chunk2 = count_dw - chunk1;
+        chunk1 <<= 2;
+        chunk2 <<= 2;
+
+        if (chunk1)
+                memcpy(dst, src, chunk1);
+
+        if (chunk2) {
+                src += chunk1;
+                dst = (void *)ring->ring;
+                memcpy(dst, src, chunk2);
+        }
+
+        ring->wptr += count_dw;
+        ring->wptr &= ring->ptr_mask;
+        ring->count_dw -= count_dw;
+}
+
+#define amdgpu_mes_ctx_get_offs_gpu_addr(ring, offset) \
+        (ring->is_mes_queue && ring->mes_ctx ? \
+         (ring->mes_ctx->meta_data_gpu_addr + offset) : 0)
+
+#define amdgpu_mes_ctx_get_offs_cpu_addr(ring, offset) \
+        (ring->is_mes_queue && ring->mes_ctx ? \
+         (void *)((uint8_t *)(ring->mes_ctx->meta_data_ptr) + offset) : \
+         NULL)
+
+int amdgpu_ring_test_helper(struct amdgpu_ring *ring);
+
+void amdgpu_debugfs_ring_init(struct amdgpu_device *adev,
+                              struct amdgpu_ring *ring);
+
+int amdgpu_ring_init_mqd(struct amdgpu_ring *ring);
+
+static inline u32 amdgpu_ib_get_value(struct amdgpu_ib *ib, int idx)
+{
+        return ib->ptr[idx];
+}
+
+static inline void amdgpu_ib_set_value(struct amdgpu_ib *ib, int idx,
+                                       uint32_t value)
+{
+        ib->ptr[idx] = value;
+}
+
+int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+                  unsigned size,
+                  enum amdgpu_ib_pool_type pool,
+                  struct amdgpu_ib *ib);
+void amdgpu_ib_free(struct amdgpu_device *adev, struct amdgpu_ib *ib,
+                    struct dma_fence *f);
+int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+                       struct amdgpu_ib *ibs, struct amdgpu_job *job,
+                       struct dma_fence **f);
+int amdgpu_ib_pool_init(struct amdgpu_device *adev);
+void amdgpu_ib_pool_fini(struct amdgpu_device *adev);
+int amdgpu_ib_ring_tests(struct amdgpu_device *adev);
+
+#endif
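For reference, a minimal sketch of how the ring write path declared in
this header is meant to be driven. The helper below is hypothetical and
only illustrates the amdgpu_ring_alloc()/amdgpu_ring_write()/
amdgpu_ring_commit() contract; real emit code lives in the per-IP files,
not in this header.

    /* Hypothetical helper illustrating the alloc/write/commit contract;
     * not part of the patch above.
     */
    static int example_emit_nops(struct amdgpu_ring *ring, unsigned int ndw)
    {
            unsigned int i;
            int r;

            /* Reserve space; this sets ring->count_dw, the write budget
             * that amdgpu_ring_write() checks against.
             */
            r = amdgpu_ring_alloc(ring, ndw);
            if (r)
                    return r;

            /* Each write stores one dword at wptr and advances it,
             * wrapping via buf_mask.
             */
            for (i = 0; i < ndw; i++)
                    amdgpu_ring_write(ring, ring->funcs->nop);

            /* Pad to the ring's alignment requirement and make the new
             * wptr visible to the hardware.
             */
            amdgpu_ring_commit(ring);
            return 0;
    }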