From 5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 Mon Sep 17 00:00:00 2001
From: Linus Torvalds
Date: Tue, 21 Feb 2023 18:24:12 -0800
Subject: Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next

Pull networking updates from Jakub Kicinski:
 "Core:

   - Add dedicated kmem_cache for typical/small skb->head, avoid having to access struct page at kfree time, and improve memory use.
   - Introduce sysctl to set default RPS configuration for new netdevs.
   - Define Netlink protocol specification format which can be used to describe messages used by each family and auto-generate parsers. Add tools for generating kernel data structures and uAPI headers.
   - Expose all net/core sysctls inside netns.
   - Remove 4s sleep in netpoll if carrier is instantly detected on boot.
   - Add configurable limit of MDB entries per port, and port-vlan.
   - Continue populating drop reasons throughout the stack.
   - Retire a handful of legacy Qdiscs and classifiers.

  Protocols:

   - Support IPv4 big TCP (TSO frames larger than 64kB).
   - Add the IP_LOCAL_PORT_RANGE socket option, to control the local port range on a socket-by-socket basis.
   - Track and report in procfs number of MPTCP sockets used.
   - Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path manager.
   - IPv6: don't check net.ipv6.route.max_size and rely on garbage collection to free memory (similarly to IPv4).
   - Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
   - ICMP: add per-rate limit counters.
   - Add support for user scanning requests in ieee802154.
   - Remove static WEP support.
   - Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate reporting.
   - WiFi 7 EHT channel puncturing support (client & AP).

  BPF:

   - Add a rbtree data structure following the "next-gen data structure" precedent set by recently added linked list, that is, by using kfunc + kptr instead of adding a new BPF map type.
   - Expose XDP hints via kfuncs with initial support for RX hash and timestamp metadata.
   - Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to better support decap on GRE tunnel devices not operating in collect metadata mode.
   - Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
   - Remove the need for trace_printk_lock for bpf_trace_printk and bpf_trace_vprintk helpers.
   - Extend libbpf's bpf_tracing.h support for tracing arguments of kprobes/uprobes and syscall as a special case.
   - Significantly reduce the search time for module symbols by livepatch and BPF.
   - Enable cpumasks to be used as kptrs, which is useful for tracing programs tracking which tasks end up running on which CPUs in different time intervals.
   - Add support for BPF trampoline on s390x and riscv64.
   - Add capability to export the XDP features supported by the NIC.
   - Add __bpf_kfunc tag for marking kernel functions as kfuncs.
   - Add cgroup.memory=nobpf kernel parameter option to disable BPF memory accounting for container environments.

  Netfilter:

   - Remove the CLUSTERIP target. It has been marked as obsolete for years, and we still have WARN splats wrt races of the out-of-band /proc interface installed by this target.
   - Add 'destroy' commands to nf_tables. They are identical to the existing 'delete' commands, but do not return an error if the referenced object (set, chain, rule...) did not exist.

  Driver API:

   - Improve cpumask_local_spread() locality to help NICs set the right IRQ affinity on AMD platforms.
   - Separate C22 and C45 MDIO bus transactions more clearly.
   - Introduce new DCB table to control DSCP rewrite on egress.
   - Support configuration of Physical Layer Collision Avoidance (PLCA) Reconciliation Sublayer (RS) (802.3cg-2019), a modern version of shared medium Ethernet.
   - Support for MAC Merge layer (IEEE 802.3-2018 clause 99), allowing preemption of low priority frames by high priority frames.
   - Add support for controlling MACSec offload using netlink SET.
   - Rework devlink instance refcounts to allow registration and de-registration under the instance lock. Split the code into multiple files, drop some of the unnecessarily granular locks and factor out common parts of netlink operation handling.
   - Add TX frame aggregation parameters (for USB drivers).
   - Add a new attribute TCA_EXT_WARN_MSG to report TC (offload) warning messages with notifications for debug.
   - Allow offloading of UDP NEW connections via act_ct.
   - Add support for per action HW stats in TC.
   - Support hardware miss to TC action (continue processing in SW from a specific point in the action chain).
   - Warn if the old Wireless Extension user space interface is used with modern cfg80211/mac80211 drivers. Do not support Wireless Extensions for Wi-Fi 7 devices at all. Everyone should switch to using the nl80211 interface instead.
   - Improve the CAN bit timing configuration. Use extack to return error messages directly to user space, update the SJW handling, including the definition of a new default value that will benefit CAN-FD controllers, by increasing their oscillator tolerance.

  New hardware / drivers:

   - Ethernet:
      - nVidia BlueField-3 support (control traffic driver)
      - Ethernet support for imx93 SoCs
      - Motorcomm yt8531 gigabit Ethernet PHY
      - onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
      - Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
      - Amlogic gxl MDIO mux
   - WiFi:
      - RealTek RTL8188EU (rtl8xxxu)
      - Qualcomm Wi-Fi 7 devices (ath12k)
   - CAN:
      - Renesas R-Car V4H

  Drivers:

   - Bluetooth:
      - Set Per Platform Antenna Gain (PPAG) for Intel controllers.
   - Ethernet NICs:
      - Intel (1G, igc):
        - support TSN / Qbv / packet scheduling features of i226 model
      - Intel (100G, ice):
        - use GNSS subsystem instead of TTY
        - multi-buffer XDP support
        - extend support for GPIO pins to E823 devices
      - nVidia/Mellanox:
        - update the shared buffer configuration on PFC commands
        - implement PTP adjphase function for HW offset control
        - TC support for Geneve and GRE with VF tunnel offload
        - more efficient crypto key management method
        - multi-port eswitch support
      - Netronome/Corigine:
        - add DCB IEEE support
        - support IPsec offloading for NFP3800
      - Freescale/NXP (enetc):
        - support XDP_REDIRECT for XDP non-linear buffers
        - improve reconfig, avoid link flap and waiting for idle
        - support MAC Merge layer
      - Other NICs:
        - sfc/ef100: add basic devlink support for ef100
        - ionic: rx_push mode operation (writing descriptors via MMIO)
        - bnxt: use the auxiliary bus abstraction for RDMA
        - r8169: disable ASPM and reset bus in case of tx timeout
        - cpsw: support QSGMII mode for J721e CPSW9G
        - cpts: support pulse-per-second output
        - ngbe: add an mdio bus driver
        - usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
        - r8152: handle devices with FW with NCM support
        - amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
        - virtio-net: support multi buffer XDP
        - virtio/vsock: replace virtio_vsock_pkt with sk_buff
        - tsnep: XDP support
   - Ethernet high-speed switches:
      - nVidia/Mellanox (mlxsw):
        - add support for latency TLV (in FW control messages)
      - Microchip (sparx5):
        - separate explicit and implicit traffic forwarding rules, make the implicit rules always active
        - add support for egress DSCP rewrite
        - IS0 VCAP support (Ingress Classification)
        - IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS etc.)
        - ES2 VCAP support (Egress Access Control)
        - support for Per-Stream Filtering and Policing (802.1Q, 8.6.5.1)
   - Ethernet embedded switches:
      - Marvell (mv88e6xxx):
        - add MAB (port auth) offload support
        - enable PTP receive for mv88e6390
      - NXP (ocelot):
        - support MAC Merge layer
        - support for the vsc7512 internal copper phys
      - Microchip:
        - lan9303: convert to PHYLINK
        - lan966x: support TC flower filter statistics
        - lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
        - lan937x: support Credit Based Shaper configuration
        - ksz9477: support Energy Efficient Ethernet
      - other:
        - qca8k: convert to regmap read/write API, use bulk operations
        - rswitch: Improve TX timestamp accuracy
   - Intel WiFi (iwlwifi):
      - EHT (Wi-Fi 7) rate reporting
      - STEP equalizer support: transfer some STEP (connection to radio on platforms with integrated wifi) related parameters from the BIOS to the firmware.
   - Qualcomm 802.11ax WiFi (ath11k):
      - IPQ5018 support
      - Fine Timing Measurement (FTM) responder role support
      - channel 177 support
   - MediaTek WiFi (mt76):
      - per-PHY LED support
      - mt7996: EHT (Wi-Fi 7) support
      - Wireless Ethernet Dispatch (WED) reset support
      - switch to using page pool allocator
   - RealTek WiFi (rtw89):
      - support new version of Bluetooth co-existence
   - Mobile:
      - rmnet: support TX aggregation"

* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
  page_pool: add a comment explaining the fragment counter usage
  net: ethtool: fix __ethtool_dev_mm_supported() implementation
  ethtool: pse-pd: Fix double word in comments
  xsk: add linux/vmalloc.h to xsk.c
  sefltests: netdevsim: wait for devlink instance after netns removal
  selftest: fib_tests: Always cleanup before exit
  net/mlx5e: Align IPsec ASO result memory to be as required by hardware
  net/mlx5e: TC, Set CT miss to the specific ct action instance
  net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
  net/mlx5: Refactor tc miss handling to a single function
  net/mlx5: Kconfig: Make tc offload depend on tc skb extension
  net/sched: flower: Support hardware miss to tc action
  net/sched: flower: Move filter handle initialization earlier
  net/sched: cls_api: Support hardware miss to tc action
  net/sched: Rename user cookie and act cookie
  sfc: fix builds without CONFIG_RTC_LIB
  sfc: clean up some inconsistent indentings
  net/mlx4_en: Introduce flexible array to silence overflow warning
  net: lan966x: Fix possible deadlock inside PTP
  net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
  ...
---
 drivers/net/dsa/sja1105/sja1105_clocking.c | 855 +++++++++++++++++++++++++++++
 1 file changed, 855 insertions(+)
 create mode 100644 drivers/net/dsa/sja1105/sja1105_clocking.c

diff --git a/drivers/net/dsa/sja1105/sja1105_clocking.c b/drivers/net/dsa/sja1105/sja1105_clocking.c
new file mode 100644
index 000000000..e3699f76f
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105_clocking.c
@@ -0,0 +1,855 @@
+// SPDX-License-Identifier: BSD-3-Clause +/* Copyright 2016-2018 NXP + * Copyright (c) 2018-2019, Vladimir Oltean + */ +#include <linux/packing.h> +#include "sja1105.h" + +#define SJA1105_SIZE_CGU_CMD 4 +#define SJA1110_BASE_MCSS_CLK SJA1110_CGU_ADDR(0x70) +#define SJA1110_BASE_TIMER_CLK SJA1110_CGU_ADDR(0x74) + +/* Common structure for CFG_PAD_MIIx_RX and CFG_PAD_MIIx_TX */ +struct sja1105_cfg_pad_mii { + u64 d32_os; + u64 d32_ih; + u64 d32_ipud; + u64 d10_ih; + u64 d10_os; + u64 d10_ipud; + u64 ctrl_os; + u64 ctrl_ih; + u64 ctrl_ipud; + u64 clk_os; + u64 clk_ih; + u64 clk_ipud; +}; + +struct sja1105_cfg_pad_mii_id { + u64 rxc_stable_ovr; + u64 rxc_delay; + u64 rxc_bypass; + u64 rxc_pd; + u64 txc_stable_ovr; + u64 txc_delay; + u64 txc_bypass; + u64 txc_pd; +}; + +/* UM10944 Table 82. + * IDIV_0_C to IDIV_4_C control registers + * (addr.
10000Bh to 10000Fh) + */ +struct sja1105_cgu_idiv { + u64 clksrc; + u64 autoblock; + u64 idiv; + u64 pd; +}; + +/* PLL_1_C control register + * + * SJA1105 E/T: UM10944 Table 81 (address 10000Ah) + * SJA1105 P/Q/R/S: UM11040 Table 116 (address 10000Ah) + */ +struct sja1105_cgu_pll_ctrl { + u64 pllclksrc; + u64 msel; + u64 autoblock; + u64 psel; + u64 direct; + u64 fbsel; + u64 bypass; + u64 pd; +}; + +struct sja1110_cgu_outclk { + u64 clksrc; + u64 autoblock; + u64 pd; +}; + +enum { + CLKSRC_MII0_TX_CLK = 0x00, + CLKSRC_MII0_RX_CLK = 0x01, + CLKSRC_MII1_TX_CLK = 0x02, + CLKSRC_MII1_RX_CLK = 0x03, + CLKSRC_MII2_TX_CLK = 0x04, + CLKSRC_MII2_RX_CLK = 0x05, + CLKSRC_MII3_TX_CLK = 0x06, + CLKSRC_MII3_RX_CLK = 0x07, + CLKSRC_MII4_TX_CLK = 0x08, + CLKSRC_MII4_RX_CLK = 0x09, + CLKSRC_PLL0 = 0x0B, + CLKSRC_PLL1 = 0x0E, + CLKSRC_IDIV0 = 0x11, + CLKSRC_IDIV1 = 0x12, + CLKSRC_IDIV2 = 0x13, + CLKSRC_IDIV3 = 0x14, + CLKSRC_IDIV4 = 0x15, +}; + +/* UM10944 Table 83. + * MIIx clock control registers 1 to 30 + * (addresses 100013h to 100035h) + */ +struct sja1105_cgu_mii_ctrl { + u64 clksrc; + u64 autoblock; + u64 pd; +}; + +static void sja1105_cgu_idiv_packing(void *buf, struct sja1105_cgu_idiv *idiv, + enum packing_op op) +{ + const int size = 4; + + sja1105_packing(buf, &idiv->clksrc, 28, 24, size, op); + sja1105_packing(buf, &idiv->autoblock, 11, 11, size, op); + sja1105_packing(buf, &idiv->idiv, 5, 2, size, op); + sja1105_packing(buf, &idiv->pd, 0, 0, size, op); +} + +static int sja1105_cgu_idiv_config(struct sja1105_private *priv, int port, + bool enabled, int factor) +{ + const struct sja1105_regs *regs = priv->info->regs; + struct device *dev = priv->ds->dev; + struct sja1105_cgu_idiv idiv; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + + if (regs->cgu_idiv[port] == SJA1105_RSV_ADDR) + return 0; + + if (enabled && factor != 1 && factor != 10) { + dev_err(dev, "idiv factor must be 1 or 10\n"); + return -ERANGE; + } + + /* Payload for packed_buf */ + idiv.clksrc = 0x0A; /* 25MHz */ + idiv.autoblock = 1; /* Block clk automatically */ + idiv.idiv = factor - 1; /* Divide by 1 or 10 */ + idiv.pd = enabled ? 0 : 1; /* Power down? 
*/ + sja1105_cgu_idiv_packing(packed_buf, &idiv, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->cgu_idiv[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +static void +sja1105_cgu_mii_control_packing(void *buf, struct sja1105_cgu_mii_ctrl *cmd, + enum packing_op op) +{ + const int size = 4; + + sja1105_packing(buf, &cmd->clksrc, 28, 24, size, op); + sja1105_packing(buf, &cmd->autoblock, 11, 11, size, op); + sja1105_packing(buf, &cmd->pd, 0, 0, size, op); +} + +static int sja1105_cgu_mii_tx_clk_config(struct sja1105_private *priv, + int port, sja1105_mii_role_t role) +{ + const struct sja1105_regs *regs = priv->info->regs; + struct sja1105_cgu_mii_ctrl mii_tx_clk; + const int mac_clk_sources[] = { + CLKSRC_MII0_TX_CLK, + CLKSRC_MII1_TX_CLK, + CLKSRC_MII2_TX_CLK, + CLKSRC_MII3_TX_CLK, + CLKSRC_MII4_TX_CLK, + }; + const int phy_clk_sources[] = { + CLKSRC_IDIV0, + CLKSRC_IDIV1, + CLKSRC_IDIV2, + CLKSRC_IDIV3, + CLKSRC_IDIV4, + }; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + int clksrc; + + if (regs->mii_tx_clk[port] == SJA1105_RSV_ADDR) + return 0; + + if (role == XMII_MAC) + clksrc = mac_clk_sources[port]; + else + clksrc = phy_clk_sources[port]; + + /* Payload for packed_buf */ + mii_tx_clk.clksrc = clksrc; + mii_tx_clk.autoblock = 1; /* Autoblock clk while changing clksrc */ + mii_tx_clk.pd = 0; /* Power Down off => enabled */ + sja1105_cgu_mii_control_packing(packed_buf, &mii_tx_clk, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->mii_tx_clk[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +static int +sja1105_cgu_mii_rx_clk_config(struct sja1105_private *priv, int port) +{ + const struct sja1105_regs *regs = priv->info->regs; + struct sja1105_cgu_mii_ctrl mii_rx_clk; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + const int clk_sources[] = { + CLKSRC_MII0_RX_CLK, + CLKSRC_MII1_RX_CLK, + CLKSRC_MII2_RX_CLK, + CLKSRC_MII3_RX_CLK, + CLKSRC_MII4_RX_CLK, + }; + + if (regs->mii_rx_clk[port] == SJA1105_RSV_ADDR) + return 0; + + /* Payload for packed_buf */ + mii_rx_clk.clksrc = clk_sources[port]; + mii_rx_clk.autoblock = 1; /* Autoblock clk while changing clksrc */ + mii_rx_clk.pd = 0; /* Power Down off => enabled */ + sja1105_cgu_mii_control_packing(packed_buf, &mii_rx_clk, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->mii_rx_clk[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +static int +sja1105_cgu_mii_ext_tx_clk_config(struct sja1105_private *priv, int port) +{ + const struct sja1105_regs *regs = priv->info->regs; + struct sja1105_cgu_mii_ctrl mii_ext_tx_clk; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + const int clk_sources[] = { + CLKSRC_IDIV0, + CLKSRC_IDIV1, + CLKSRC_IDIV2, + CLKSRC_IDIV3, + CLKSRC_IDIV4, + }; + + if (regs->mii_ext_tx_clk[port] == SJA1105_RSV_ADDR) + return 0; + + /* Payload for packed_buf */ + mii_ext_tx_clk.clksrc = clk_sources[port]; + mii_ext_tx_clk.autoblock = 1; /* Autoblock clk while changing clksrc */ + mii_ext_tx_clk.pd = 0; /* Power Down off => enabled */ + sja1105_cgu_mii_control_packing(packed_buf, &mii_ext_tx_clk, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->mii_ext_tx_clk[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +static int +sja1105_cgu_mii_ext_rx_clk_config(struct sja1105_private *priv, int port) +{ + const struct sja1105_regs *regs = priv->info->regs; + struct sja1105_cgu_mii_ctrl mii_ext_rx_clk; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + const int clk_sources[] = { + CLKSRC_IDIV0, + CLKSRC_IDIV1, + CLKSRC_IDIV2, + CLKSRC_IDIV3, + CLKSRC_IDIV4, + }; + + if (regs->mii_ext_rx_clk[port] == 
SJA1105_RSV_ADDR) + return 0; + + /* Payload for packed_buf */ + mii_ext_rx_clk.clksrc = clk_sources[port]; + mii_ext_rx_clk.autoblock = 1; /* Autoblock clk while changing clksrc */ + mii_ext_rx_clk.pd = 0; /* Power Down off => enabled */ + sja1105_cgu_mii_control_packing(packed_buf, &mii_ext_rx_clk, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->mii_ext_rx_clk[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +static int sja1105_mii_clocking_setup(struct sja1105_private *priv, int port, + sja1105_mii_role_t role) +{ + struct device *dev = priv->ds->dev; + int rc; + + dev_dbg(dev, "Configuring MII-%s clocking\n", + (role == XMII_MAC) ? "MAC" : "PHY"); + /* If role is MAC, disable IDIV + * If role is PHY, enable IDIV and configure for 1/1 divider + */ + rc = sja1105_cgu_idiv_config(priv, port, (role == XMII_PHY), 1); + if (rc < 0) + return rc; + + /* Configure CLKSRC of MII_TX_CLK_n + * * If role is MAC, select TX_CLK_n + * * If role is PHY, select IDIV_n + */ + rc = sja1105_cgu_mii_tx_clk_config(priv, port, role); + if (rc < 0) + return rc; + + /* Configure CLKSRC of MII_RX_CLK_n + * Select RX_CLK_n + */ + rc = sja1105_cgu_mii_rx_clk_config(priv, port); + if (rc < 0) + return rc; + + if (role == XMII_PHY) { + /* Per MII spec, the PHY (which is us) drives the TX_CLK pin */ + + /* Configure CLKSRC of EXT_TX_CLK_n + * Select IDIV_n + */ + rc = sja1105_cgu_mii_ext_tx_clk_config(priv, port); + if (rc < 0) + return rc; + + /* Configure CLKSRC of EXT_RX_CLK_n + * Select IDIV_n + */ + rc = sja1105_cgu_mii_ext_rx_clk_config(priv, port); + if (rc < 0) + return rc; + } + return 0; +} + +static void +sja1105_cgu_pll_control_packing(void *buf, struct sja1105_cgu_pll_ctrl *cmd, + enum packing_op op) +{ + const int size = 4; + + sja1105_packing(buf, &cmd->pllclksrc, 28, 24, size, op); + sja1105_packing(buf, &cmd->msel, 23, 16, size, op); + sja1105_packing(buf, &cmd->autoblock, 11, 11, size, op); + sja1105_packing(buf, &cmd->psel, 9, 8, size, op); + sja1105_packing(buf, &cmd->direct, 7, 7, size, op); + sja1105_packing(buf, &cmd->fbsel, 6, 6, size, op); + sja1105_packing(buf, &cmd->bypass, 1, 1, size, op); + sja1105_packing(buf, &cmd->pd, 0, 0, size, op); +} + +static int sja1105_cgu_rgmii_tx_clk_config(struct sja1105_private *priv, + int port, u64 speed) +{ + const struct sja1105_regs *regs = priv->info->regs; + struct sja1105_cgu_mii_ctrl txc; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + int clksrc; + + if (regs->rgmii_tx_clk[port] == SJA1105_RSV_ADDR) + return 0; + + if (speed == priv->info->port_speed[SJA1105_SPEED_1000MBPS]) { + clksrc = CLKSRC_PLL0; + } else { + int clk_sources[] = {CLKSRC_IDIV0, CLKSRC_IDIV1, CLKSRC_IDIV2, + CLKSRC_IDIV3, CLKSRC_IDIV4}; + clksrc = clk_sources[port]; + } + + /* RGMII: 125MHz for 1000, 25MHz for 100, 2.5MHz for 10 */ + txc.clksrc = clksrc; + /* Autoblock clk while changing clksrc */ + txc.autoblock = 1; + /* Power Down off => enabled */ + txc.pd = 0; + sja1105_cgu_mii_control_packing(packed_buf, &txc, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->rgmii_tx_clk[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +/* AGU */ +static void +sja1105_cfg_pad_mii_packing(void *buf, struct sja1105_cfg_pad_mii *cmd, + enum packing_op op) +{ + const int size = 4; + + sja1105_packing(buf, &cmd->d32_os, 28, 27, size, op); + sja1105_packing(buf, &cmd->d32_ih, 26, 26, size, op); + sja1105_packing(buf, &cmd->d32_ipud, 25, 24, size, op); + sja1105_packing(buf, &cmd->d10_os, 20, 19, size, op); + sja1105_packing(buf, &cmd->d10_ih, 18, 18, size, op); + 
sja1105_packing(buf, &cmd->d10_ipud, 17, 16, size, op); + sja1105_packing(buf, &cmd->ctrl_os, 12, 11, size, op); + sja1105_packing(buf, &cmd->ctrl_ih, 10, 10, size, op); + sja1105_packing(buf, &cmd->ctrl_ipud, 9, 8, size, op); + sja1105_packing(buf, &cmd->clk_os, 4, 3, size, op); + sja1105_packing(buf, &cmd->clk_ih, 2, 2, size, op); + sja1105_packing(buf, &cmd->clk_ipud, 1, 0, size, op); +} + +static int sja1105_rgmii_cfg_pad_tx_config(struct sja1105_private *priv, + int port) +{ + const struct sja1105_regs *regs = priv->info->regs; + struct sja1105_cfg_pad_mii pad_mii_tx = {0}; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + + if (regs->pad_mii_tx[port] == SJA1105_RSV_ADDR) + return 0; + + /* Payload */ + pad_mii_tx.d32_os = 3; /* TXD[3:2] output stage: */ + /* high noise/high speed */ + pad_mii_tx.d10_os = 3; /* TXD[1:0] output stage: */ + /* high noise/high speed */ + pad_mii_tx.d32_ipud = 2; /* TXD[3:2] input stage: */ + /* plain input (default) */ + pad_mii_tx.d10_ipud = 2; /* TXD[1:0] input stage: */ + /* plain input (default) */ + pad_mii_tx.ctrl_os = 3; /* TX_CTL / TX_ER output stage */ + pad_mii_tx.ctrl_ipud = 2; /* TX_CTL / TX_ER input stage (default) */ + pad_mii_tx.clk_os = 3; /* TX_CLK output stage */ + pad_mii_tx.clk_ih = 0; /* TX_CLK input hysteresis (default) */ + pad_mii_tx.clk_ipud = 2; /* TX_CLK input stage (default) */ + sja1105_cfg_pad_mii_packing(packed_buf, &pad_mii_tx, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->pad_mii_tx[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +static int sja1105_cfg_pad_rx_config(struct sja1105_private *priv, int port) +{ + const struct sja1105_regs *regs = priv->info->regs; + struct sja1105_cfg_pad_mii pad_mii_rx = {0}; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + + if (regs->pad_mii_rx[port] == SJA1105_RSV_ADDR) + return 0; + + /* Payload */ + pad_mii_rx.d32_ih = 0; /* RXD[3:2] input stage hysteresis: */ + /* non-Schmitt (default) */ + pad_mii_rx.d32_ipud = 2; /* RXD[3:2] input weak pull-up/down */ + /* plain input (default) */ + pad_mii_rx.d10_ih = 0; /* RXD[1:0] input stage hysteresis: */ + /* non-Schmitt (default) */ + pad_mii_rx.d10_ipud = 2; /* RXD[1:0] input weak pull-up/down */ + /* plain input (default) */ + pad_mii_rx.ctrl_ih = 0; /* RX_DV/CRS_DV/RX_CTL and RX_ER */ + /* input stage hysteresis: */ + /* non-Schmitt (default) */ + pad_mii_rx.ctrl_ipud = 3; /* RX_DV/CRS_DV/RX_CTL and RX_ER */ + /* input stage weak pull-up/down: */ + /* pull-down */ + pad_mii_rx.clk_os = 2; /* RX_CLK/RXC output stage: */ + /* medium noise/fast speed (default) */ + pad_mii_rx.clk_ih = 0; /* RX_CLK/RXC input hysteresis: */ + /* non-Schmitt (default) */ + pad_mii_rx.clk_ipud = 2; /* RX_CLK/RXC input pull-up/down: */ + /* plain input (default) */ + sja1105_cfg_pad_mii_packing(packed_buf, &pad_mii_rx, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->pad_mii_rx[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +static void +sja1105_cfg_pad_mii_id_packing(void *buf, struct sja1105_cfg_pad_mii_id *cmd, + enum packing_op op) +{ + const int size = SJA1105_SIZE_CGU_CMD; + + sja1105_packing(buf, &cmd->rxc_stable_ovr, 15, 15, size, op); + sja1105_packing(buf, &cmd->rxc_delay, 14, 10, size, op); + sja1105_packing(buf, &cmd->rxc_bypass, 9, 9, size, op); + sja1105_packing(buf, &cmd->rxc_pd, 8, 8, size, op); + sja1105_packing(buf, &cmd->txc_stable_ovr, 7, 7, size, op); + sja1105_packing(buf, &cmd->txc_delay, 6, 2, size, op); + sja1105_packing(buf, &cmd->txc_bypass, 1, 1, size, op); + sja1105_packing(buf, &cmd->txc_pd, 0, 0, size, op); +} + 
+static void +sja1110_cfg_pad_mii_id_packing(void *buf, struct sja1105_cfg_pad_mii_id *cmd, + enum packing_op op) +{ + const int size = SJA1105_SIZE_CGU_CMD; + u64 range = 4; + + /* Fields RXC_RANGE and TXC_RANGE select the input frequency range: + * 0 = 2.5MHz + * 1 = 25MHz + * 2 = 50MHz + * 3 = 125MHz + * 4 = Automatically determined by port speed. + * There's no point in defining a structure different than the one for + * SJA1105, so just hardcode the frequency range to automatic, just as + * before. + */ + sja1105_packing(buf, &cmd->rxc_stable_ovr, 26, 26, size, op); + sja1105_packing(buf, &cmd->rxc_delay, 25, 21, size, op); + sja1105_packing(buf, &range, 20, 18, size, op); + sja1105_packing(buf, &cmd->rxc_bypass, 17, 17, size, op); + sja1105_packing(buf, &cmd->rxc_pd, 16, 16, size, op); + sja1105_packing(buf, &cmd->txc_stable_ovr, 10, 10, size, op); + sja1105_packing(buf, &cmd->txc_delay, 9, 5, size, op); + sja1105_packing(buf, &range, 4, 2, size, op); + sja1105_packing(buf, &cmd->txc_bypass, 1, 1, size, op); + sja1105_packing(buf, &cmd->txc_pd, 0, 0, size, op); +} + +/* The RGMII delay setup procedure is 2-step and gets called upon each + * .phylink_mac_config. Both are strategic. + * The reason is that the RX Tunable Delay Line of the SJA1105 MAC has issues + * with recovering from a frequency change of the link partner's RGMII clock. + * The easiest way to recover from this is to temporarily power down the TDL, + * as it will re-lock at the new frequency afterwards. + */ +int sja1105pqrs_setup_rgmii_delay(const void *ctx, int port) +{ + const struct sja1105_private *priv = ctx; + const struct sja1105_regs *regs = priv->info->regs; + struct sja1105_cfg_pad_mii_id pad_mii_id = {0}; + int rx_delay = priv->rgmii_rx_delay_ps[port]; + int tx_delay = priv->rgmii_tx_delay_ps[port]; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + int rc; + + if (rx_delay) + pad_mii_id.rxc_delay = SJA1105_RGMII_DELAY_PS_TO_HW(rx_delay); + if (tx_delay) + pad_mii_id.txc_delay = SJA1105_RGMII_DELAY_PS_TO_HW(tx_delay); + + /* Stage 1: Turn the RGMII delay lines off. */ + pad_mii_id.rxc_bypass = 1; + pad_mii_id.rxc_pd = 1; + pad_mii_id.txc_bypass = 1; + pad_mii_id.txc_pd = 1; + sja1105_cfg_pad_mii_id_packing(packed_buf, &pad_mii_id, PACK); + + rc = sja1105_xfer_buf(priv, SPI_WRITE, regs->pad_mii_id[port], + packed_buf, SJA1105_SIZE_CGU_CMD); + if (rc < 0) + return rc; + + /* Stage 2: Turn the RGMII delay lines on. 
*/ + if (rx_delay) { + pad_mii_id.rxc_bypass = 0; + pad_mii_id.rxc_pd = 0; + } + if (tx_delay) { + pad_mii_id.txc_bypass = 0; + pad_mii_id.txc_pd = 0; + } + sja1105_cfg_pad_mii_id_packing(packed_buf, &pad_mii_id, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->pad_mii_id[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +int sja1110_setup_rgmii_delay(const void *ctx, int port) +{ + const struct sja1105_private *priv = ctx; + const struct sja1105_regs *regs = priv->info->regs; + struct sja1105_cfg_pad_mii_id pad_mii_id = {0}; + int rx_delay = priv->rgmii_rx_delay_ps[port]; + int tx_delay = priv->rgmii_tx_delay_ps[port]; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + + pad_mii_id.rxc_pd = 1; + pad_mii_id.txc_pd = 1; + + if (rx_delay) { + pad_mii_id.rxc_delay = SJA1105_RGMII_DELAY_PS_TO_HW(rx_delay); + /* The "BYPASS" bit in SJA1110 is actually a "don't bypass" */ + pad_mii_id.rxc_bypass = 1; + pad_mii_id.rxc_pd = 0; + } + + if (tx_delay) { + pad_mii_id.txc_delay = SJA1105_RGMII_DELAY_PS_TO_HW(tx_delay); + pad_mii_id.txc_bypass = 1; + pad_mii_id.txc_pd = 0; + } + + sja1110_cfg_pad_mii_id_packing(packed_buf, &pad_mii_id, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->pad_mii_id[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +static int sja1105_rgmii_clocking_setup(struct sja1105_private *priv, int port, + sja1105_mii_role_t role) +{ + struct device *dev = priv->ds->dev; + struct sja1105_mac_config_entry *mac; + u64 speed; + int rc; + + mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries; + speed = mac[port].speed; + + dev_dbg(dev, "Configuring port %d RGMII at speed %lldMbps\n", + port, speed); + + if (speed == priv->info->port_speed[SJA1105_SPEED_1000MBPS]) { + /* 1000Mbps, IDIV disabled (125 MHz) */ + rc = sja1105_cgu_idiv_config(priv, port, false, 1); + } else if (speed == priv->info->port_speed[SJA1105_SPEED_100MBPS]) { + /* 100Mbps, IDIV enabled, divide by 1 (25 MHz) */ + rc = sja1105_cgu_idiv_config(priv, port, true, 1); + } else if (speed == priv->info->port_speed[SJA1105_SPEED_10MBPS]) { + /* 10Mbps, IDIV enabled, divide by 10 (2.5 MHz) */ + rc = sja1105_cgu_idiv_config(priv, port, true, 10); + } else if (speed == priv->info->port_speed[SJA1105_SPEED_AUTO]) { + /* Skip CGU configuration if there is no speed available + * (e.g. 
link is not established yet) + */ + dev_dbg(dev, "Speed not available, skipping CGU config\n"); + return 0; + } else { + rc = -EINVAL; + } + + if (rc < 0) { + dev_err(dev, "Failed to configure idiv\n"); + return rc; + } + rc = sja1105_cgu_rgmii_tx_clk_config(priv, port, speed); + if (rc < 0) { + dev_err(dev, "Failed to configure RGMII Tx clock\n"); + return rc; + } + rc = sja1105_rgmii_cfg_pad_tx_config(priv, port); + if (rc < 0) { + dev_err(dev, "Failed to configure Tx pad registers\n"); + return rc; + } + + if (!priv->info->setup_rgmii_delay) + return 0; + + return priv->info->setup_rgmii_delay(priv, port); +} + +static int sja1105_cgu_rmii_ref_clk_config(struct sja1105_private *priv, + int port) +{ + const struct sja1105_regs *regs = priv->info->regs; + struct sja1105_cgu_mii_ctrl ref_clk; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + const int clk_sources[] = { + CLKSRC_MII0_TX_CLK, + CLKSRC_MII1_TX_CLK, + CLKSRC_MII2_TX_CLK, + CLKSRC_MII3_TX_CLK, + CLKSRC_MII4_TX_CLK, + }; + + if (regs->rmii_ref_clk[port] == SJA1105_RSV_ADDR) + return 0; + + /* Payload for packed_buf */ + ref_clk.clksrc = clk_sources[port]; + ref_clk.autoblock = 1; /* Autoblock clk while changing clksrc */ + ref_clk.pd = 0; /* Power Down off => enabled */ + sja1105_cgu_mii_control_packing(packed_buf, &ref_clk, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->rmii_ref_clk[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +static int +sja1105_cgu_rmii_ext_tx_clk_config(struct sja1105_private *priv, int port) +{ + const struct sja1105_regs *regs = priv->info->regs; + struct sja1105_cgu_mii_ctrl ext_tx_clk; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + + if (regs->rmii_ext_tx_clk[port] == SJA1105_RSV_ADDR) + return 0; + + /* Payload for packed_buf */ + ext_tx_clk.clksrc = CLKSRC_PLL1; + ext_tx_clk.autoblock = 1; /* Autoblock clk while changing clksrc */ + ext_tx_clk.pd = 0; /* Power Down off => enabled */ + sja1105_cgu_mii_control_packing(packed_buf, &ext_tx_clk, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, regs->rmii_ext_tx_clk[port], + packed_buf, SJA1105_SIZE_CGU_CMD); +} + +static int sja1105_cgu_rmii_pll_config(struct sja1105_private *priv) +{ + const struct sja1105_regs *regs = priv->info->regs; + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + struct sja1105_cgu_pll_ctrl pll = {0}; + struct device *dev = priv->ds->dev; + int rc; + + if (regs->rmii_pll1 == SJA1105_RSV_ADDR) + return 0; + + /* PLL1 must be enabled and output 50 Mhz. + * This is done by writing first 0x0A010941 to + * the PLL_1_C register and then deasserting + * power down (PD) 0x0A010940. 
+ */ + + /* Step 1: PLL1 setup for 50Mhz */ + pll.pllclksrc = 0xA; + pll.msel = 0x1; + pll.autoblock = 0x1; + pll.psel = 0x1; + pll.direct = 0x0; + pll.fbsel = 0x1; + pll.bypass = 0x0; + pll.pd = 0x1; + + sja1105_cgu_pll_control_packing(packed_buf, &pll, PACK); + rc = sja1105_xfer_buf(priv, SPI_WRITE, regs->rmii_pll1, packed_buf, + SJA1105_SIZE_CGU_CMD); + if (rc < 0) { + dev_err(dev, "failed to configure PLL1 for 50MHz\n"); + return rc; + } + + /* Step 2: Enable PLL1 */ + pll.pd = 0x0; + + sja1105_cgu_pll_control_packing(packed_buf, &pll, PACK); + rc = sja1105_xfer_buf(priv, SPI_WRITE, regs->rmii_pll1, packed_buf, + SJA1105_SIZE_CGU_CMD); + if (rc < 0) { + dev_err(dev, "failed to enable PLL1\n"); + return rc; + } + return rc; +} + +static int sja1105_rmii_clocking_setup(struct sja1105_private *priv, int port, + sja1105_mii_role_t role) +{ + struct device *dev = priv->ds->dev; + int rc; + + dev_dbg(dev, "Configuring RMII-%s clocking\n", + (role == XMII_MAC) ? "MAC" : "PHY"); + /* AH1601.pdf chapter 2.5.1. Sources */ + if (role == XMII_MAC) { + /* Configure and enable PLL1 for 50Mhz output */ + rc = sja1105_cgu_rmii_pll_config(priv); + if (rc < 0) + return rc; + } + /* Disable IDIV for this port */ + rc = sja1105_cgu_idiv_config(priv, port, false, 1); + if (rc < 0) + return rc; + /* Source to sink mappings */ + rc = sja1105_cgu_rmii_ref_clk_config(priv, port); + if (rc < 0) + return rc; + if (role == XMII_MAC) { + rc = sja1105_cgu_rmii_ext_tx_clk_config(priv, port); + if (rc < 0) + return rc; + } + return 0; +} + +int sja1105_clocking_setup_port(struct sja1105_private *priv, int port) +{ + struct sja1105_xmii_params_entry *mii; + struct device *dev = priv->ds->dev; + sja1105_phy_interface_t phy_mode; + sja1105_mii_role_t role; + int rc; + + mii = priv->static_config.tables[BLK_IDX_XMII_PARAMS].entries; + + /* RGMII etc */ + phy_mode = mii->xmii_mode[port]; + /* MAC or PHY, for applicable types (not RGMII) */ + role = mii->phy_mac[port]; + + switch (phy_mode) { + case XMII_MODE_MII: + rc = sja1105_mii_clocking_setup(priv, port, role); + break; + case XMII_MODE_RMII: + rc = sja1105_rmii_clocking_setup(priv, port, role); + break; + case XMII_MODE_RGMII: + rc = sja1105_rgmii_clocking_setup(priv, port, role); + break; + case XMII_MODE_SGMII: + /* Nothing to do in the CGU for SGMII */ + rc = 0; + break; + default: + dev_err(dev, "Invalid interface mode specified: %d\n", + phy_mode); + return -EINVAL; + } + if (rc) { + dev_err(dev, "Clocking setup for port %d failed: %d\n", + port, rc); + return rc; + } + + /* Internally pull down the RX_DV/CRS_DV/RX_CTL and RX_ER inputs */ + return sja1105_cfg_pad_rx_config(priv, port); +} + +int sja1105_clocking_setup(struct sja1105_private *priv) +{ + struct dsa_switch *ds = priv->ds; + int port, rc; + + for (port = 0; port < ds->num_ports; port++) { + rc = sja1105_clocking_setup_port(priv, port); + if (rc < 0) + return rc; + } + return 0; +} + +static void +sja1110_cgu_outclk_packing(void *buf, struct sja1110_cgu_outclk *outclk, + enum packing_op op) +{ + const int size = 4; + + sja1105_packing(buf, &outclk->clksrc, 27, 24, size, op); + sja1105_packing(buf, &outclk->autoblock, 11, 11, size, op); + sja1105_packing(buf, &outclk->pd, 0, 0, size, op); +} + +int sja1110_disable_microcontroller(struct sja1105_private *priv) +{ + u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0}; + struct sja1110_cgu_outclk outclk_6_c = { + .clksrc = 0x3, + .pd = true, + }; + struct sja1110_cgu_outclk outclk_7_c = { + .clksrc = 0x5, + .pd = true, + }; + int rc; + + /* Power down the 
BASE_TIMER_CLK to disable the watchdog timer */ + sja1110_cgu_outclk_packing(packed_buf, &outclk_7_c, PACK); + + rc = sja1105_xfer_buf(priv, SPI_WRITE, SJA1110_BASE_TIMER_CLK, + packed_buf, SJA1105_SIZE_CGU_CMD); + if (rc) + return rc; + + /* Power down the BASE_MCSS_CLOCK to gate the microcontroller off */ + sja1110_cgu_outclk_packing(packed_buf, &outclk_6_c, PACK); + + return sja1105_xfer_buf(priv, SPI_WRITE, SJA1110_BASE_MCSS_CLK, + packed_buf, SJA1105_SIZE_CGU_CMD); +} -- cgit v1.2.3
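A side note on the magic numbers quoted in sja1105_cgu_rmii_pll_config(): the values 0x0A010941 and 0x0A010940 mentioned in its comment follow directly from the PLL_1_C bit layout encoded by sja1105_cgu_pll_control_packing(). The stand-alone sketch below recomputes both values from those field positions. It is an illustration only, not part of the patch: it does not use the kernel's <linux/packing.h> API, and put_field() is a hypothetical helper standing in for it.

/*
 * Illustration: rebuild the PLL_1_C register images used by
 * sja1105_cgu_rmii_pll_config() from the field positions defined in
 * sja1105_cgu_pll_control_packing(). put_field() is a stand-in for the
 * kernel packing helpers, not driver code.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Place val into bits [msb:lsb] of a 32-bit register image. */
static uint32_t put_field(uint32_t reg, uint32_t val, int msb, int lsb)
{
	uint32_t mask = (uint32_t)(((1ULL << (msb - lsb + 1)) - 1) << lsb);

	return (reg & ~mask) | ((val << lsb) & mask);
}

int main(void)
{
	uint32_t pll = 0;

	pll = put_field(pll, 0xA, 28, 24);	/* pllclksrc */
	pll = put_field(pll, 0x1, 23, 16);	/* msel */
	pll = put_field(pll, 0x1, 11, 11);	/* autoblock */
	pll = put_field(pll, 0x1, 9, 8);	/* psel */
	pll = put_field(pll, 0x0, 7, 7);	/* direct */
	pll = put_field(pll, 0x1, 6, 6);	/* fbsel */
	pll = put_field(pll, 0x0, 1, 1);	/* bypass */
	pll = put_field(pll, 0x1, 0, 0);	/* pd: configured but powered down */

	/* Step 1 in the driver: prints 0x0A010941 */
	printf("PLL_1_C step 1: 0x%08" PRIX32 "\n", pll);

	/* Step 2: deassert only PD to bring PLL1 up; prints 0x0A010940 */
	pll = put_field(pll, 0x0, 0, 0);
	printf("PLL_1_C step 2: 0x%08" PRIX32 "\n", pll);

	return 0;
}

This makes the two-step sequence in the driver concrete: the first SPI write programs every PLL_1_C field with PD still set, and the second write clears only the PD bit so PLL1 starts outputting the 50 MHz RMII reference clock.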