author     2023-02-21 18:24:12 -0800
committer  2023-02-21 18:24:12 -0800
commit     5b7c4cabbb65f5c469464da6c5f614cbd7f730f2
tree       cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs.
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port range
on a socket-by-socket basis.
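A minimal userspace sketch of how the new option can be used. The option constant (51) and the packing of the range (low port in the lower 16 bits, high port in the upper 16 bits) are assumptions based on the 6.3 uAPI, not taken from the text above:

#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IP_LOCAL_PORT_RANGE
#define IP_LOCAL_PORT_RANGE 51          /* assumed value from the 6.3 uAPI headers */
#endif

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        /* Restrict this socket's ephemeral ports to 40000-40100
         * (low port in bits 0-15, high port in bits 16-31).
         */
        uint32_t range = (40100u << 16) | 40000u;

        if (fd < 0)
                return 1;
        if (setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
                       &range, sizeof(range)) < 0)
                perror("setsockopt(IP_LOCAL_PORT_RANGE)");
        close(fd);
        return 0;
}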
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
BPF:
- Add an rbtree data structure following the "next-gen data structure"
precedent set by the recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
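A sketch of an XDP program reading these hints. The kfunc names and signatures (bpf_xdp_metadata_rx_hash, bpf_xdp_metadata_rx_timestamp) are assumed from the initial 6.3 support and may differ in later kernels; the in-tree selftests remain the authoritative reference:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Declarations of the (assumed) metadata kfuncs exported by the kernel. */
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
                                    __u32 *hash) __ksym;
extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
                                         __u64 *timestamp) __ksym;

SEC("xdp")
int rx_hints(struct xdp_md *ctx)
{
        __u32 hash = 0;
        __u64 ts = 0;

        /* Both calls return 0 on success and a negative errno otherwise. */
        if (!bpf_xdp_metadata_rx_hash(ctx, &hash) &&
            !bpf_xdp_metadata_rx_timestamp(ctx, &ts))
                bpf_printk("rx hash 0x%x ts %llu", hash, ts);

        return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";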
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in collect
metadata mode.
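A hedged sketch of a TC program using the new flag; the address, flag combination and the keyless-GRE setup it pairs with are illustrative assumptions, not taken from the changes above:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

SEC("tc")
int set_tun(struct __sk_buff *skb)
{
        struct bpf_tunnel_key key = {};

        key.remote_ipv4 = 0x0a000001;   /* 10.0.0.1, illustration only */

        /* BPF_F_NO_TUNNEL_KEY asks the helper not to mark the metadata
         * with a tunnel key, so traffic can be handled by a GRE device
         * that was configured without a key.
         */
        if (bpf_skb_set_tunnel_key(skb, &key, sizeof(key),
                                   BPF_F_ZERO_CSUM_TX | BPF_F_NO_TUNNEL_KEY))
                return TC_ACT_SHOT;

        return TC_ACT_OK;
}

char LICENSE[] SEC("license") = "GPL";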
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscalls as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs.
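A kernel-side sketch of what the new tag looks like in practice. The function and set names are made up, and the BTF_SET8_* registration macros are assumed to be the 6.3-era interface:

#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/module.h>

/* __bpf_kfunc keeps the symbol out of dead-code elimination and marks it
 * as intended to be callable from BPF programs.
 */
__bpf_kfunc u32 bpf_example_sum(u32 a, u32 b)
{
        return a + b;
}

BTF_SET8_START(example_kfunc_ids)
BTF_ID_FLAGS(func, bpf_example_sum)
BTF_SET8_END(example_kfunc_ids)

static const struct btf_kfunc_id_set example_kfunc_set = {
        .owner = THIS_MODULE,
        .set   = &example_kfunc_ids,
};

static int __init example_kfuncs_init(void)
{
        /* Make the kfunc visible to (here) tracing programs. */
        return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
                                         &example_kfunc_set);
}
late_initcall(example_kfuncs_init);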
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races in the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of Physical Layer Collision Avoidance (PLCA)
Reconciliation Sublayer (RS) (802.3cg-2019), a modern version of
shared-medium Ethernet.
- Support for the MAC Merge layer (IEEE 802.3-2018 clause 99), allowing
preemption of low-priority frames by high-priority frames.
- Add support for controlling MACsec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if the old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using the nl80211 interface instead.
- Improve the CAN bit timing configuration. Use extack to return
error messages directly to user space, update the SJW handling,
including the definition of a new default value that will benefit
CAN-FD controllers by increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper PHYs
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c')
-rw-r--r--   drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c   1193
1 file changed, 1193 insertions, 0 deletions
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c new file mode 100644 index 000000000..d3cc5ec46 --- /dev/null +++ b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c @@ -0,0 +1,1193 @@ +/* + * Copyright 2012-16 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: AMD + * + */ + +#include "core_types.h" +#include "link_encoder.h" +#include "dce_dmcu.h" +#include "dm_services.h" +#include "reg_helper.h" +#include "fixed31_32.h" +#include "dc.h" + +#define TO_DCE_DMCU(dmcu)\ + container_of(dmcu, struct dce_dmcu, base) + +#define REG(reg) \ + (dmcu_dce->regs->reg) + +#undef FN +#define FN(reg_name, field_name) \ + dmcu_dce->dmcu_shift->field_name, dmcu_dce->dmcu_mask->field_name + +#define CTX \ + dmcu_dce->base.ctx + +/* PSR related commands */ +#define PSR_ENABLE 0x20 +#define PSR_EXIT 0x21 +#define PSR_SET 0x23 +#define PSR_SET_WAITLOOP 0x31 +#define MCP_INIT_DMCU 0x88 +#define MCP_INIT_IRAM 0x89 +#define MCP_SYNC_PHY_LOCK 0x90 +#define MCP_SYNC_PHY_UNLOCK 0x91 +#define MCP_BL_SET_PWM_FRAC 0x6A /* Enable or disable Fractional PWM */ +#define CRC_WIN_NOTIFY 0x92 +#define CRC_STOP_UPDATE 0x93 +#define MCP_SEND_EDID_CEA 0xA0 +#define EDID_CEA_CMD_ACK 1 +#define EDID_CEA_CMD_NACK 2 +#define MASTER_COMM_CNTL_REG__MASTER_COMM_INTERRUPT_MASK 0x00000001L + +// PSP FW version +#define mmMP0_SMN_C2PMSG_58 0x1607A + +//Register access policy version +#define mmMP0_SMN_C2PMSG_91 0x1609B + +static const uint32_t abm_gain_stepsize = 0x0060; + +static bool dce_dmcu_init(struct dmcu *dmcu) +{ + // Do nothing + return true; +} + +static bool dce_dmcu_load_iram(struct dmcu *dmcu, + unsigned int start_offset, + const char *src, + unsigned int bytes) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + unsigned int count = 0; + + /* Enable write access to IRAM */ + REG_UPDATE_2(DMCU_RAM_ACCESS_CTRL, + IRAM_HOST_ACCESS_EN, 1, + IRAM_WR_ADDR_AUTO_INC, 1); + + REG_WAIT(DCI_MEM_PWR_STATUS, DMCU_IRAM_MEM_PWR_STATE, 0, 2, 10); + + REG_WRITE(DMCU_IRAM_WR_CTRL, start_offset); + + for (count = 0; count < bytes; count++) + REG_WRITE(DMCU_IRAM_WR_DATA, src[count]); + + /* Disable write access to IRAM to allow dynamic sleep state */ + REG_UPDATE_2(DMCU_RAM_ACCESS_CTRL, + IRAM_HOST_ACCESS_EN, 0, + IRAM_WR_ADDR_AUTO_INC, 0); + + return true; +} + +static void dce_get_dmcu_psr_state(struct dmcu *dmcu, enum dc_psr_state *state) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + + uint32_t 
psr_state_offset = 0xf0; + + /* Enable write access to IRAM */ + REG_UPDATE(DMCU_RAM_ACCESS_CTRL, IRAM_HOST_ACCESS_EN, 1); + + REG_WAIT(DCI_MEM_PWR_STATUS, DMCU_IRAM_MEM_PWR_STATE, 0, 2, 10); + + /* Write address to IRAM_RD_ADDR in DMCU_IRAM_RD_CTRL */ + REG_WRITE(DMCU_IRAM_RD_CTRL, psr_state_offset); + + /* Read data from IRAM_RD_DATA in DMCU_IRAM_RD_DATA*/ + *state = (enum dc_psr_state)REG_READ(DMCU_IRAM_RD_DATA); + + /* Disable write access to IRAM after finished using IRAM + * in order to allow dynamic sleep state + */ + REG_UPDATE(DMCU_RAM_ACCESS_CTRL, IRAM_HOST_ACCESS_EN, 0); +} + +static void dce_dmcu_set_psr_enable(struct dmcu *dmcu, bool enable, bool wait) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + unsigned int dmcu_max_retry_on_wait_reg_ready = 801; + unsigned int dmcu_wait_reg_ready_interval = 100; + + unsigned int retryCount; + enum dc_psr_state state = PSR_STATE0; + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, + dmcu_wait_reg_ready_interval, + dmcu_max_retry_on_wait_reg_ready); + + /* setDMCUParam_Cmd */ + if (enable) + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, + PSR_ENABLE); + else + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, + PSR_EXIT); + + /* notifyDMCUMsg */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); + if (wait == true) { + for (retryCount = 0; retryCount <= 100; retryCount++) { + dce_get_dmcu_psr_state(dmcu, &state); + if (enable) { + if (state != PSR_STATE0) + break; + } else { + if (state == PSR_STATE0) + break; + } + udelay(10); + } + } +} + +static bool dce_dmcu_setup_psr(struct dmcu *dmcu, + struct dc_link *link, + struct psr_context *psr_context) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + + unsigned int dmcu_max_retry_on_wait_reg_ready = 801; + unsigned int dmcu_wait_reg_ready_interval = 100; + + union dce_dmcu_psr_config_data_reg1 masterCmdData1; + union dce_dmcu_psr_config_data_reg2 masterCmdData2; + union dce_dmcu_psr_config_data_reg3 masterCmdData3; + + link->link_enc->funcs->psr_program_dp_dphy_fast_training(link->link_enc, + psr_context->psrExitLinkTrainingRequired); + + /* Enable static screen interrupts for PSR supported display */ + /* Disable the interrupt coming from other displays. */ + REG_UPDATE_4(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN1_INT_TO_UC_EN, 0, + STATIC_SCREEN2_INT_TO_UC_EN, 0, + STATIC_SCREEN3_INT_TO_UC_EN, 0, + STATIC_SCREEN4_INT_TO_UC_EN, 0); + + switch (psr_context->controllerId) { + /* Driver uses case 1 for unconfigured */ + case 1: + REG_UPDATE(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN1_INT_TO_UC_EN, 1); + break; + case 2: + REG_UPDATE(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN2_INT_TO_UC_EN, 1); + break; + case 3: + REG_UPDATE(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN3_INT_TO_UC_EN, 1); + break; + case 4: + REG_UPDATE(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN4_INT_TO_UC_EN, 1); + break; + case 5: + /* CZ/NL only has 4 CRTC!! + * really valid. + * There is no interrupt enable mask for these instances. + */ + break; + case 6: + /* CZ/NL only has 4 CRTC!! + * These are here because they are defined in HW regspec, + * but not really valid. There is no interrupt enable mask + * for these instances. 
+ */ + break; + default: + REG_UPDATE(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN1_INT_TO_UC_EN, 1); + break; + } + + link->link_enc->funcs->psr_program_secondary_packet(link->link_enc, + psr_context->sdpTransmitLineNumDeadline); + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, + dmcu_wait_reg_ready_interval, + dmcu_max_retry_on_wait_reg_ready); + + /* setDMCUParam_PSRHostConfigData */ + masterCmdData1.u32All = 0; + masterCmdData1.bits.timehyst_frames = psr_context->timehyst_frames; + masterCmdData1.bits.hyst_lines = psr_context->hyst_lines; + masterCmdData1.bits.rfb_update_auto_en = + psr_context->rfb_update_auto_en; + masterCmdData1.bits.dp_port_num = psr_context->transmitterId; + masterCmdData1.bits.dcp_sel = psr_context->controllerId; + masterCmdData1.bits.phy_type = psr_context->phyType; + masterCmdData1.bits.frame_cap_ind = + psr_context->psrFrameCaptureIndicationReq; + masterCmdData1.bits.aux_chan = psr_context->channel; + masterCmdData1.bits.aux_repeat = psr_context->aux_repeats; + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG1), + masterCmdData1.u32All); + + masterCmdData2.u32All = 0; + masterCmdData2.bits.dig_fe = psr_context->engineId; + masterCmdData2.bits.dig_be = psr_context->transmitterId; + masterCmdData2.bits.skip_wait_for_pll_lock = + psr_context->skipPsrWaitForPllLock; + masterCmdData2.bits.frame_delay = psr_context->frame_delay; + masterCmdData2.bits.smu_phy_id = psr_context->smuPhyId; + masterCmdData2.bits.num_of_controllers = + psr_context->numberOfControllers; + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG2), + masterCmdData2.u32All); + + masterCmdData3.u32All = 0; + masterCmdData3.bits.psr_level = psr_context->psr_level.u32all; + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG3), + masterCmdData3.u32All); + + /* setDMCUParam_Cmd */ + REG_UPDATE(MASTER_COMM_CMD_REG, + MASTER_COMM_CMD_REG_BYTE0, PSR_SET); + + /* notifyDMCUMsg */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); + + return true; +} + +static bool dce_is_dmcu_initialized(struct dmcu *dmcu) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + unsigned int dmcu_uc_reset; + + /* microcontroller is not running */ + REG_GET(DMCU_STATUS, UC_IN_RESET, &dmcu_uc_reset); + + /* DMCU is not running */ + if (dmcu_uc_reset) + return false; + + return true; +} + +static void dce_psr_wait_loop( + struct dmcu *dmcu, + unsigned int wait_loop_number) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + union dce_dmcu_psr_config_data_wait_loop_reg1 masterCmdData1; + + if (dmcu->cached_wait_loop_number == wait_loop_number) + return; + + /* DMCU is not running */ + if (!dce_is_dmcu_initialized(dmcu)) + return; + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000); + + masterCmdData1.u32 = 0; + masterCmdData1.bits.wait_loop = wait_loop_number; + dmcu->cached_wait_loop_number = wait_loop_number; + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG1), masterCmdData1.u32); + + /* setDMCUParam_Cmd */ + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, PSR_SET_WAITLOOP); + + /* notifyDMCUMsg */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); +} + +static void dce_get_psr_wait_loop( + struct dmcu *dmcu, unsigned int *psr_wait_loop_number) +{ + *psr_wait_loop_number = dmcu->cached_wait_loop_number; + return; +} + +static void dcn10_get_dmcu_version(struct dmcu *dmcu) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + uint32_t dmcu_version_offset = 0xf1; + + /* Enable write access to IRAM */ + 
REG_UPDATE_2(DMCU_RAM_ACCESS_CTRL, + IRAM_HOST_ACCESS_EN, 1, + IRAM_RD_ADDR_AUTO_INC, 1); + + REG_WAIT(DMU_MEM_PWR_CNTL, DMCU_IRAM_MEM_PWR_STATE, 0, 2, 10); + + /* Write address to IRAM_RD_ADDR and read from DATA register */ + REG_WRITE(DMCU_IRAM_RD_CTRL, dmcu_version_offset); + dmcu->dmcu_version.interface_version = REG_READ(DMCU_IRAM_RD_DATA); + dmcu->dmcu_version.abm_version = REG_READ(DMCU_IRAM_RD_DATA); + dmcu->dmcu_version.psr_version = REG_READ(DMCU_IRAM_RD_DATA); + dmcu->dmcu_version.build_version = ((REG_READ(DMCU_IRAM_RD_DATA) << 8) | + REG_READ(DMCU_IRAM_RD_DATA)); + + /* Disable write access to IRAM to allow dynamic sleep state */ + REG_UPDATE_2(DMCU_RAM_ACCESS_CTRL, + IRAM_HOST_ACCESS_EN, 0, + IRAM_RD_ADDR_AUTO_INC, 0); +} + +static void dcn10_dmcu_enable_fractional_pwm(struct dmcu *dmcu, + uint32_t fractional_pwm) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + + /* Wait until microcontroller is ready to process interrupt */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 100, 800); + + /* Set PWM fractional enable/disable */ + REG_WRITE(MASTER_COMM_DATA_REG1, fractional_pwm); + + /* Set command to enable or disable fractional PWM microcontroller */ + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, + MCP_BL_SET_PWM_FRAC); + + /* Notify microcontroller of new command */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); + + /* Ensure command has been executed before continuing */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 100, 800); +} + +static bool dcn10_dmcu_init(struct dmcu *dmcu) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + const struct dc_config *config = &dmcu->ctx->dc->config; + bool status = false; + struct dc_context *ctx = dmcu->ctx; + unsigned int i; + // 5 4 3 2 1 0 + // F E D C B A - bit 0 is A, bit 5 is F + unsigned int tx_interrupt_mask = 0; + + PERF_TRACE(); + /* Definition of DC_DMCU_SCRATCH + * 0 : firmare not loaded + * 1 : PSP load DMCU FW but not initialized + * 2 : Firmware already initialized + */ + dmcu->dmcu_state = REG_READ(DC_DMCU_SCRATCH); + + for (i = 0; i < ctx->dc->link_count; i++) { + if (ctx->dc->links[i]->link_enc->features.flags.bits.DP_IS_USB_C) { + if (ctx->dc->links[i]->link_enc->transmitter >= TRANSMITTER_UNIPHY_A && + ctx->dc->links[i]->link_enc->transmitter <= TRANSMITTER_UNIPHY_F) { + tx_interrupt_mask |= 1 << ctx->dc->links[i]->link_enc->transmitter; + } + } + } + + switch (dmcu->dmcu_state) { + case DMCU_UNLOADED: + status = false; + break; + case DMCU_LOADED_UNINITIALIZED: + /* Wait until microcontroller is ready to process interrupt */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 100, 800); + + /* Set initialized ramping boundary value */ + REG_WRITE(MASTER_COMM_DATA_REG1, 0xFFFF); + + /* Set backlight ramping stepsize */ + REG_WRITE(MASTER_COMM_DATA_REG2, abm_gain_stepsize); + + REG_WRITE(MASTER_COMM_DATA_REG3, tx_interrupt_mask); + + /* Set command to initialize microcontroller */ + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, + MCP_INIT_DMCU); + + /* Notify microcontroller of new command */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); + + /* Ensure command has been executed before continuing */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 100, 800); + + // Check state is initialized + dmcu->dmcu_state = REG_READ(DC_DMCU_SCRATCH); + + // If microcontroller is not in running state, fail + if (dmcu->dmcu_state == DMCU_RUNNING) { + /* Retrieve and cache the DMCU firmware version. 
*/ + dcn10_get_dmcu_version(dmcu); + + /* Initialize DMCU to use fractional PWM or not */ + dcn10_dmcu_enable_fractional_pwm(dmcu, + (config->disable_fractional_pwm == false) ? 1 : 0); + status = true; + } else { + status = false; + } + + break; + case DMCU_RUNNING: + status = true; + break; + default: + status = false; + break; + } + + PERF_TRACE(); + return status; +} + +static bool dcn21_dmcu_init(struct dmcu *dmcu) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + uint32_t dmcub_psp_version = REG_READ(DMCUB_SCRATCH15); + + if (dmcu->auto_load_dmcu && dmcub_psp_version == 0) { + return false; + } + + return dcn10_dmcu_init(dmcu); +} + +static bool dcn10_dmcu_load_iram(struct dmcu *dmcu, + unsigned int start_offset, + const char *src, + unsigned int bytes) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + unsigned int count = 0; + + /* If microcontroller is not running, do nothing */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return false; + + /* Enable write access to IRAM */ + REG_UPDATE_2(DMCU_RAM_ACCESS_CTRL, + IRAM_HOST_ACCESS_EN, 1, + IRAM_WR_ADDR_AUTO_INC, 1); + + REG_WAIT(DMU_MEM_PWR_CNTL, DMCU_IRAM_MEM_PWR_STATE, 0, 2, 10); + + REG_WRITE(DMCU_IRAM_WR_CTRL, start_offset); + + for (count = 0; count < bytes; count++) + REG_WRITE(DMCU_IRAM_WR_DATA, src[count]); + + /* Disable write access to IRAM to allow dynamic sleep state */ + REG_UPDATE_2(DMCU_RAM_ACCESS_CTRL, + IRAM_HOST_ACCESS_EN, 0, + IRAM_WR_ADDR_AUTO_INC, 0); + + /* Wait until microcontroller is ready to process interrupt */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 100, 800); + + /* Set command to signal IRAM is loaded and to initialize IRAM */ + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, + MCP_INIT_IRAM); + + /* Notify microcontroller of new command */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); + + /* Ensure command has been executed before continuing */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 100, 800); + + return true; +} + +static void dcn10_get_dmcu_psr_state(struct dmcu *dmcu, enum dc_psr_state *state) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + + uint32_t psr_state_offset = 0xf0; + + /* If microcontroller is not running, do nothing */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return; + + /* Enable write access to IRAM */ + REG_UPDATE(DMCU_RAM_ACCESS_CTRL, IRAM_HOST_ACCESS_EN, 1); + + REG_WAIT(DMU_MEM_PWR_CNTL, DMCU_IRAM_MEM_PWR_STATE, 0, 2, 10); + + /* Write address to IRAM_RD_ADDR in DMCU_IRAM_RD_CTRL */ + REG_WRITE(DMCU_IRAM_RD_CTRL, psr_state_offset); + + /* Read data from IRAM_RD_DATA in DMCU_IRAM_RD_DATA*/ + *state = (enum dc_psr_state)REG_READ(DMCU_IRAM_RD_DATA); + + /* Disable write access to IRAM after finished using IRAM + * in order to allow dynamic sleep state + */ + REG_UPDATE(DMCU_RAM_ACCESS_CTRL, IRAM_HOST_ACCESS_EN, 0); +} + +static void dcn10_dmcu_set_psr_enable(struct dmcu *dmcu, bool enable, bool wait) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + unsigned int dmcu_max_retry_on_wait_reg_ready = 801; + unsigned int dmcu_wait_reg_ready_interval = 100; + + unsigned int retryCount; + enum dc_psr_state state = PSR_STATE0; + + /* If microcontroller is not running, do nothing */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return; + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, + dmcu_wait_reg_ready_interval, + dmcu_max_retry_on_wait_reg_ready); + + /* setDMCUParam_Cmd */ + if (enable) + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, + PSR_ENABLE); + else + 
REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, + PSR_EXIT); + + /* notifyDMCUMsg */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); + + /* Below loops 1000 x 500us = 500 ms. + * Exit PSR may need to wait 1-2 frames to power up. Timeout after at + * least a few frames. Should never hit the max retry assert below. + */ + if (wait == true) { + for (retryCount = 0; retryCount <= 1000; retryCount++) { + dcn10_get_dmcu_psr_state(dmcu, &state); + if (enable) { + if (state != PSR_STATE0) + break; + } else { + if (state == PSR_STATE0) + break; + } + udelay(500); + } + + /* assert if max retry hit */ + if (retryCount >= 1000) + ASSERT(0); + } +} + +static bool dcn10_dmcu_setup_psr(struct dmcu *dmcu, + struct dc_link *link, + struct psr_context *psr_context) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + + unsigned int dmcu_max_retry_on_wait_reg_ready = 801; + unsigned int dmcu_wait_reg_ready_interval = 100; + + union dce_dmcu_psr_config_data_reg1 masterCmdData1; + union dce_dmcu_psr_config_data_reg2 masterCmdData2; + union dce_dmcu_psr_config_data_reg3 masterCmdData3; + + /* If microcontroller is not running, do nothing */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return false; + + link->link_enc->funcs->psr_program_dp_dphy_fast_training(link->link_enc, + psr_context->psrExitLinkTrainingRequired); + + /* Enable static screen interrupts for PSR supported display */ + /* Disable the interrupt coming from other displays. */ + REG_UPDATE_4(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN1_INT_TO_UC_EN, 0, + STATIC_SCREEN2_INT_TO_UC_EN, 0, + STATIC_SCREEN3_INT_TO_UC_EN, 0, + STATIC_SCREEN4_INT_TO_UC_EN, 0); + + switch (psr_context->controllerId) { + /* Driver uses case 1 for unconfigured */ + case 1: + REG_UPDATE(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN1_INT_TO_UC_EN, 1); + break; + case 2: + REG_UPDATE(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN2_INT_TO_UC_EN, 1); + break; + case 3: + REG_UPDATE(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN3_INT_TO_UC_EN, 1); + break; + case 4: + REG_UPDATE(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN4_INT_TO_UC_EN, 1); + break; + case 5: + /* CZ/NL only has 4 CRTC!! + * really valid. + * There is no interrupt enable mask for these instances. + */ + break; + case 6: + /* CZ/NL only has 4 CRTC!! + * These are here because they are defined in HW regspec, + * but not really valid. There is no interrupt enable mask + * for these instances. 
+ */ + break; + default: + REG_UPDATE(DMCU_INTERRUPT_TO_UC_EN_MASK, + STATIC_SCREEN1_INT_TO_UC_EN, 1); + break; + } + + link->link_enc->funcs->psr_program_secondary_packet(link->link_enc, + psr_context->sdpTransmitLineNumDeadline); + + if (psr_context->allow_smu_optimizations) + REG_UPDATE(SMU_INTERRUPT_CONTROL, DC_SMU_INT_ENABLE, 1); + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, + dmcu_wait_reg_ready_interval, + dmcu_max_retry_on_wait_reg_ready); + + /* setDMCUParam_PSRHostConfigData */ + masterCmdData1.u32All = 0; + masterCmdData1.bits.timehyst_frames = psr_context->timehyst_frames; + masterCmdData1.bits.hyst_lines = psr_context->hyst_lines; + masterCmdData1.bits.rfb_update_auto_en = + psr_context->rfb_update_auto_en; + masterCmdData1.bits.dp_port_num = psr_context->transmitterId; + masterCmdData1.bits.dcp_sel = psr_context->controllerId; + masterCmdData1.bits.phy_type = psr_context->phyType; + masterCmdData1.bits.frame_cap_ind = + psr_context->psrFrameCaptureIndicationReq; + masterCmdData1.bits.aux_chan = psr_context->channel; + masterCmdData1.bits.aux_repeat = psr_context->aux_repeats; + masterCmdData1.bits.allow_smu_optimizations = psr_context->allow_smu_optimizations; + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG1), + masterCmdData1.u32All); + + masterCmdData2.u32All = 0; + masterCmdData2.bits.dig_fe = psr_context->engineId; + masterCmdData2.bits.dig_be = psr_context->transmitterId; + masterCmdData2.bits.skip_wait_for_pll_lock = + psr_context->skipPsrWaitForPllLock; + masterCmdData2.bits.frame_delay = psr_context->frame_delay; + masterCmdData2.bits.smu_phy_id = psr_context->smuPhyId; + masterCmdData2.bits.num_of_controllers = + psr_context->numberOfControllers; + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG2), + masterCmdData2.u32All); + + masterCmdData3.u32All = 0; + masterCmdData3.bits.psr_level = psr_context->psr_level.u32all; + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG3), + masterCmdData3.u32All); + + + /* setDMCUParam_Cmd */ + REG_UPDATE(MASTER_COMM_CMD_REG, + MASTER_COMM_CMD_REG_BYTE0, PSR_SET); + + /* notifyDMCUMsg */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000); + + return true; +} + +static void dcn10_psr_wait_loop( + struct dmcu *dmcu, + unsigned int wait_loop_number) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + union dce_dmcu_psr_config_data_wait_loop_reg1 masterCmdData1; + + /* If microcontroller is not running, do nothing */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return; + + if (wait_loop_number != 0) { + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000); + + masterCmdData1.u32 = 0; + masterCmdData1.bits.wait_loop = wait_loop_number; + dmcu->cached_wait_loop_number = wait_loop_number; + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG1), masterCmdData1.u32); + + /* setDMCUParam_Cmd */ + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, PSR_SET_WAITLOOP); + + /* notifyDMCUMsg */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); + } +} + +static void dcn10_get_psr_wait_loop( + struct dmcu *dmcu, unsigned int *psr_wait_loop_number) +{ + *psr_wait_loop_number = dmcu->cached_wait_loop_number; + return; +} + +static bool dcn10_is_dmcu_initialized(struct dmcu *dmcu) +{ + /* microcontroller is not running */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return false; + return true; +} + + + +static bool dcn20_lock_phy(struct dmcu 
*dmcu) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + + /* If microcontroller is not running, do nothing */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return false; + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000); + + /* setDMCUParam_Cmd */ + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, MCP_SYNC_PHY_LOCK); + + /* notifyDMCUMsg */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000); + + return true; +} + +static bool dcn20_unlock_phy(struct dmcu *dmcu) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + + /* If microcontroller is not running, do nothing */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return false; + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000); + + /* setDMCUParam_Cmd */ + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, MCP_SYNC_PHY_UNLOCK); + + /* notifyDMCUMsg */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000); + + return true; +} + +static bool dcn10_send_edid_cea(struct dmcu *dmcu, + int offset, + int total_length, + uint8_t *data, + int length) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + uint32_t header, data1, data2; + + /* If microcontroller is not running, do nothing */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return false; + + if (length > 8 || length <= 0) + return false; + + header = ((uint32_t)offset & 0xFFFF) << 16 | (total_length & 0xFFFF); + data1 = (((uint32_t)data[0]) << 24) | (((uint32_t)data[1]) << 16) | + (((uint32_t)data[2]) << 8) | ((uint32_t)data[3]); + data2 = (((uint32_t)data[4]) << 24) | (((uint32_t)data[5]) << 16) | + (((uint32_t)data[6]) << 8) | ((uint32_t)data[7]); + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000); + + /* setDMCUParam_Cmd */ + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, MCP_SEND_EDID_CEA); + + REG_WRITE(MASTER_COMM_DATA_REG1, header); + REG_WRITE(MASTER_COMM_DATA_REG2, data1); + REG_WRITE(MASTER_COMM_DATA_REG3, data2); + + /* notifyDMCUMsg */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000); + + return true; +} + +static bool dcn10_get_scp_results(struct dmcu *dmcu, + uint32_t *cmd, + uint32_t *data1, + uint32_t *data2, + uint32_t *data3) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + + /* If microcontroller is not running, do nothing */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return false; + + *cmd = REG_READ(SLAVE_COMM_CMD_REG); + *data1 = REG_READ(SLAVE_COMM_DATA_REG1); + *data2 = REG_READ(SLAVE_COMM_DATA_REG2); + *data3 = REG_READ(SLAVE_COMM_DATA_REG3); + + /* clear SCP interrupt */ + REG_UPDATE(SLAVE_COMM_CNTL_REG, SLAVE_COMM_INTERRUPT, 0); + + return true; +} + +static bool dcn10_recv_amd_vsdb(struct dmcu *dmcu, + int *version, + int *min_frame_rate, + int *max_frame_rate) +{ + uint32_t data[4]; + int cmd, ack, len; + + if (!dcn10_get_scp_results(dmcu, &data[0], &data[1], &data[2], &data[3])) + return false; + + cmd = data[0] & 0x3FF; + len = (data[0] >> 10) & 0x3F; + ack = data[1]; + + if (cmd != MCP_SEND_EDID_CEA || ack != EDID_CEA_CMD_ACK || len != 12) + return false; + + if ((data[2] & 0xFF)) { + *version = (data[2] >> 8) & 0xFF; + *min_frame_rate = (data[3] >> 16) & 0xFFFF; + 
*max_frame_rate = data[3] & 0xFFFF; + return true; + } + + return false; +} + +static bool dcn10_recv_edid_cea_ack(struct dmcu *dmcu, int *offset) +{ + uint32_t data[4]; + int cmd, ack; + + if (!dcn10_get_scp_results(dmcu, + &data[0], &data[1], &data[2], &data[3])) + return false; + + cmd = data[0] & 0x3FF; + ack = data[1]; + + if (cmd != MCP_SEND_EDID_CEA) + return false; + + if (ack == EDID_CEA_CMD_ACK) + return true; + + *offset = data[2]; /* nack */ + return false; +} + + +#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY) +static void dcn10_forward_crc_window(struct dmcu *dmcu, + struct rect *rect, + struct otg_phy_mux *mux_mapping) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + unsigned int dmcu_max_retry_on_wait_reg_ready = 801; + unsigned int dmcu_wait_reg_ready_interval = 100; + unsigned int crc_start = 0, crc_end = 0, otg_phy_mux = 0; + int x_start, y_start, x_end, y_end; + + /* If microcontroller is not running, do nothing */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return; + + if (!rect) + return; + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, + dmcu_wait_reg_ready_interval, + dmcu_max_retry_on_wait_reg_ready); + + x_start = rect->x; + y_start = rect->y; + x_end = x_start + rect->width; + y_end = y_start + rect->height; + + /* build up nitification data */ + crc_start = (((unsigned int) x_start) << 16) | y_start; + crc_end = (((unsigned int) x_end) << 16) | y_end; + otg_phy_mux = + (((unsigned int) mux_mapping->otg_output_num) << 16) | mux_mapping->phy_output_num; + + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG1), + crc_start); + + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG2), + crc_end); + + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG3), + otg_phy_mux); + + /* setDMCUParam_Cmd */ + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, + CRC_WIN_NOTIFY); + + /* notifyDMCUMsg */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); +} + +static void dcn10_stop_crc_win_update(struct dmcu *dmcu, + struct otg_phy_mux *mux_mapping) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu); + unsigned int dmcu_max_retry_on_wait_reg_ready = 801; + unsigned int dmcu_wait_reg_ready_interval = 100; + unsigned int otg_phy_mux = 0; + + /* If microcontroller is not running, do nothing */ + if (dmcu->dmcu_state != DMCU_RUNNING) + return; + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, + dmcu_wait_reg_ready_interval, + dmcu_max_retry_on_wait_reg_ready); + + /* build up nitification data */ + otg_phy_mux = + (((unsigned int) mux_mapping->otg_output_num) << 16) | mux_mapping->phy_output_num; + + dm_write_reg(dmcu->ctx, REG(MASTER_COMM_DATA_REG1), + otg_phy_mux); + + /* setDMCUParam_Cmd */ + REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, + CRC_STOP_UPDATE); + + /* notifyDMCUMsg */ + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); +} +#endif + +static const struct dmcu_funcs dce_funcs = { + .dmcu_init = dce_dmcu_init, + .load_iram = dce_dmcu_load_iram, + .set_psr_enable = dce_dmcu_set_psr_enable, + .setup_psr = dce_dmcu_setup_psr, + .get_psr_state = dce_get_dmcu_psr_state, + .set_psr_wait_loop = dce_psr_wait_loop, + .get_psr_wait_loop = dce_get_psr_wait_loop, + .is_dmcu_initialized = dce_is_dmcu_initialized +}; + +static const struct dmcu_funcs dcn10_funcs = { + .dmcu_init = dcn10_dmcu_init, + .load_iram = dcn10_dmcu_load_iram, + .set_psr_enable = dcn10_dmcu_set_psr_enable, + .setup_psr = dcn10_dmcu_setup_psr, + .get_psr_state = dcn10_get_dmcu_psr_state, + 
.set_psr_wait_loop = dcn10_psr_wait_loop, + .get_psr_wait_loop = dcn10_get_psr_wait_loop, + .send_edid_cea = dcn10_send_edid_cea, + .recv_amd_vsdb = dcn10_recv_amd_vsdb, + .recv_edid_cea_ack = dcn10_recv_edid_cea_ack, +#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY) + .forward_crc_window = dcn10_forward_crc_window, + .stop_crc_win_update = dcn10_stop_crc_win_update, +#endif + .is_dmcu_initialized = dcn10_is_dmcu_initialized +}; + +static const struct dmcu_funcs dcn20_funcs = { + .dmcu_init = dcn10_dmcu_init, + .load_iram = dcn10_dmcu_load_iram, + .set_psr_enable = dcn10_dmcu_set_psr_enable, + .setup_psr = dcn10_dmcu_setup_psr, + .get_psr_state = dcn10_get_dmcu_psr_state, + .set_psr_wait_loop = dcn10_psr_wait_loop, + .get_psr_wait_loop = dcn10_get_psr_wait_loop, + .is_dmcu_initialized = dcn10_is_dmcu_initialized, + .lock_phy = dcn20_lock_phy, + .unlock_phy = dcn20_unlock_phy +}; + +static const struct dmcu_funcs dcn21_funcs = { + .dmcu_init = dcn21_dmcu_init, + .load_iram = dcn10_dmcu_load_iram, + .set_psr_enable = dcn10_dmcu_set_psr_enable, + .setup_psr = dcn10_dmcu_setup_psr, + .get_psr_state = dcn10_get_dmcu_psr_state, + .set_psr_wait_loop = dcn10_psr_wait_loop, + .get_psr_wait_loop = dcn10_get_psr_wait_loop, + .is_dmcu_initialized = dcn10_is_dmcu_initialized, + .lock_phy = dcn20_lock_phy, + .unlock_phy = dcn20_unlock_phy +}; + +static void dce_dmcu_construct( + struct dce_dmcu *dmcu_dce, + struct dc_context *ctx, + const struct dce_dmcu_registers *regs, + const struct dce_dmcu_shift *dmcu_shift, + const struct dce_dmcu_mask *dmcu_mask) +{ + struct dmcu *base = &dmcu_dce->base; + + base->ctx = ctx; + base->funcs = &dce_funcs; + base->cached_wait_loop_number = 0; + + dmcu_dce->regs = regs; + dmcu_dce->dmcu_shift = dmcu_shift; + dmcu_dce->dmcu_mask = dmcu_mask; +} + +static void dcn21_dmcu_construct( + struct dce_dmcu *dmcu_dce, + struct dc_context *ctx, + const struct dce_dmcu_registers *regs, + const struct dce_dmcu_shift *dmcu_shift, + const struct dce_dmcu_mask *dmcu_mask) +{ + uint32_t psp_version = 0; + + dce_dmcu_construct(dmcu_dce, ctx, regs, dmcu_shift, dmcu_mask); + + if (!IS_FPGA_MAXIMUS_DC(ctx->dce_environment)) { + psp_version = dm_read_reg(ctx, mmMP0_SMN_C2PMSG_58); + dmcu_dce->base.auto_load_dmcu = ((psp_version & 0x00FF00FF) > 0x00110029); + dmcu_dce->base.psp_version = psp_version; + } +} + +struct dmcu *dce_dmcu_create( + struct dc_context *ctx, + const struct dce_dmcu_registers *regs, + const struct dce_dmcu_shift *dmcu_shift, + const struct dce_dmcu_mask *dmcu_mask) +{ + struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_KERNEL); + + if (dmcu_dce == NULL) { + BREAK_TO_DEBUGGER(); + return NULL; + } + + dce_dmcu_construct( + dmcu_dce, ctx, regs, dmcu_shift, dmcu_mask); + + dmcu_dce->base.funcs = &dce_funcs; + + return &dmcu_dce->base; +} + +struct dmcu *dcn10_dmcu_create( + struct dc_context *ctx, + const struct dce_dmcu_registers *regs, + const struct dce_dmcu_shift *dmcu_shift, + const struct dce_dmcu_mask *dmcu_mask) +{ + struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC); + + if (dmcu_dce == NULL) { + BREAK_TO_DEBUGGER(); + return NULL; + } + + dce_dmcu_construct( + dmcu_dce, ctx, regs, dmcu_shift, dmcu_mask); + + dmcu_dce->base.funcs = &dcn10_funcs; + + return &dmcu_dce->base; +} + +struct dmcu *dcn20_dmcu_create( + struct dc_context *ctx, + const struct dce_dmcu_registers *regs, + const struct dce_dmcu_shift *dmcu_shift, + const struct dce_dmcu_mask *dmcu_mask) +{ + struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC); + + if 
(dmcu_dce == NULL) { + BREAK_TO_DEBUGGER(); + return NULL; + } + + dce_dmcu_construct( + dmcu_dce, ctx, regs, dmcu_shift, dmcu_mask); + + dmcu_dce->base.funcs = &dcn20_funcs; + + return &dmcu_dce->base; +} + +struct dmcu *dcn21_dmcu_create( + struct dc_context *ctx, + const struct dce_dmcu_registers *regs, + const struct dce_dmcu_shift *dmcu_shift, + const struct dce_dmcu_mask *dmcu_mask) +{ + struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC); + + if (dmcu_dce == NULL) { + BREAK_TO_DEBUGGER(); + return NULL; + } + + dcn21_dmcu_construct( + dmcu_dce, ctx, regs, dmcu_shift, dmcu_mask); + + dmcu_dce->base.funcs = &dcn21_funcs; + + return &dmcu_dce->base; +} + +void dce_dmcu_destroy(struct dmcu **dmcu) +{ + struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(*dmcu); + + kfree(dmcu_dce); + *dmcu = NULL; +} |