author	2023-02-21 18:24:12 -0800
committer	2023-02-21 18:24:12 -0800
commit	5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree	cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /drivers/mmc/host/mmci_stm32_sdmmc.c
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs.
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port
range on a per-socket basis (see the userspace sketch after the
quoted message).
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
BPF:
- Add an rbtree data structure following the "next-gen data structure"
precedent set by the recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type (see the sketch
after the quoted message).
- Expose XDP hints via kfuncs, with initial support for RX hash and
timestamp metadata (see the sketch after the quoted message).
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in
collect-metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscall as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs (see the
registration sketch after the quoted message).
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races of the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of Physical Layer Collision Avoidance (PLCA)
Reconciliation Sublayer (RS) (802.3cg-2019). Modern version of
shared medium Ethernet.
- Support for MAC Merge layer (IEEE 802.3-2018 clause 99). Allowing
preemption of low priority frames by high priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using nl80211 interface instead.
- Improve the CAN bit timing configuration. Use extack to return
error messages directly to user space, update the SJW handling,
including the definition of a new default value that will benefit
CAN-FD controllers, by increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper PHYs
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
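
Editor's note: the IP_LOCAL_PORT_RANGE option mentioned under "Protocols" above is a plain setsockopt() knob. Below is a minimal userspace sketch, assuming the uAPI of this release: the option takes a u32 with the low port bound in bits 0-15 and the high bound in bits 16-31, and may need to be defined by hand on pre-6.3 headers.

/* Sketch: restrict one socket's ephemeral ports to 40000-40100. */
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IP_LOCAL_PORT_RANGE
#define IP_LOCAL_PORT_RANGE 51	/* from <linux/in.h> as of 6.3 */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	/* low bound in bits 0-15, high bound in bits 16-31 */
	uint32_t range = 40000u | (40100u << 16);

	if (fd < 0 ||
	    setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
		       &range, sizeof(range)) < 0) {
		perror("IP_LOCAL_PORT_RANGE");
		return 1;
	}
	/* bind()/connect() with port 0 now picks from 40000-40100 only */
	close(fd);
	return 0;
}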
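
Editor's note: a hedged sketch of the new BPF rbtree, following the pattern used by the kernel's own selftests (tools/testing/selftests/bpf/progs/rbtree.c). The kfunc and macro declarations (bpf_obj_new(), bpf_rbtree_add(), __contains) come from the selftests' bpf_experimental.h; the private() and container_of() macros are repeated here, as in the selftests, to keep the sketch self-contained.

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

#ifndef container_of
#define container_of(ptr, type, member) \
	((type *)((void *)(ptr) - offsetof(type, member)))
#endif

struct node_data {
	long key;
	struct bpf_rb_node node;
};

/* comparator passed to bpf_rbtree_add(); defines the tree order */
static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
{
	struct node_data *na = container_of(a, struct node_data, node);
	const struct node_data *nb = container_of(b, struct node_data, node);

	return na->key < nb->key;
}

/* the root and its protecting spin lock must share a data section */
#define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8)))
private(A) struct bpf_spin_lock glock;
private(A) struct bpf_rb_root groot __contains(node_data, node);

SEC("tc")
int add_one_node(void *ctx)
{
	struct node_data *n;

	n = bpf_obj_new(typeof(*n));	/* allocator kfunc, returns a kptr */
	if (!n)
		return 0;
	n->key = 42;

	bpf_spin_lock(&glock);		/* rbtree ops require holding the lock */
	bpf_rbtree_add(&groot, &n->node, less);
	bpf_spin_unlock(&glock);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";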
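
Editor's note: the XDP RX hints are exposed as kfuncs that only succeed on drivers implementing them. A minimal sketch, assuming the two metadata kfunc signatures as defined in this cycle (later releases extend bpf_xdp_metadata_rx_hash() with a hash-type argument):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

/* kfunc prototypes, resolved against the kernel's BTF at load time */
extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
					 __u64 *timestamp) __ksym;
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
				    __u32 *hash) __ksym;

SEC("xdp")
int rx_hints(struct xdp_md *ctx)
{
	__u64 ts = 0;
	__u32 hash = 0;

	/* each call returns 0 on success, -errno if the driver or
	 * device cannot supply that hint for this frame */
	if (!bpf_xdp_metadata_rx_timestamp(ctx, &ts))
		bpf_printk("rx timestamp: %llu", ts);
	if (!bpf_xdp_metadata_rx_hash(ctx, &hash))
		bpf_printk("rx hash: 0x%x", hash);

	return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";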
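
Editor's note: __bpf_kfunc is a kernel-side annotation that keeps a kfunc visible to BTF and out of the compiler's reach. A hedged sketch of defining and registering one, using the BTF_SET8 registration pattern of this cycle; bpf_example_inc is a made-up name for illustration only.

#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/init.h>
#include <linux/module.h>

/* __bpf_kfunc prevents inlining/removal and keeps BTF for the symbol */
__bpf_kfunc u64 bpf_example_inc(u64 x)
{
	return x + 1;
}

BTF_SET8_START(example_kfunc_ids)
BTF_ID_FLAGS(func, bpf_example_inc)
BTF_SET8_END(example_kfunc_ids)

static const struct btf_kfunc_id_set example_kfunc_set = {
	.owner = THIS_MODULE,
	.set   = &example_kfunc_ids,
};

static int __init example_init(void)
{
	/* make the kfunc callable from tracing programs */
	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
					 &example_kfunc_set);
}
late_initcall(example_init);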
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'drivers/mmc/host/mmci_stm32_sdmmc.c')
-rw-r--r--	drivers/mmc/host/mmci_stm32_sdmmc.c	597
1 file changed, 597 insertions, 0 deletions
diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
new file mode 100644
index 000000000..60bca78a7
--- /dev/null
+++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
@@ -0,0 +1,597 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) STMicroelectronics 2018 - All Rights Reserved
+ * Author: Ludovic.barre@st.com for STMicroelectronics.
+ */
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/iopoll.h>
+#include <linux/mmc/host.h>
+#include <linux/mmc/card.h>
+#include <linux/of_address.h>
+#include <linux/reset.h>
+#include <linux/scatterlist.h>
+#include "mmci.h"
+
+#define SDMMC_LLI_BUF_LEN	PAGE_SIZE
+#define SDMMC_IDMA_BURST	BIT(MMCI_STM32_IDMABNDT_SHIFT)
+
+#define DLYB_CR			0x0
+#define DLYB_CR_DEN		BIT(0)
+#define DLYB_CR_SEN		BIT(1)
+
+#define DLYB_CFGR		0x4
+#define DLYB_CFGR_SEL_MASK	GENMASK(3, 0)
+#define DLYB_CFGR_UNIT_MASK	GENMASK(14, 8)
+#define DLYB_CFGR_LNG_MASK	GENMASK(27, 16)
+#define DLYB_CFGR_LNGF		BIT(31)
+
+#define DLYB_NB_DELAY		11
+#define DLYB_CFGR_SEL_MAX	(DLYB_NB_DELAY + 1)
+#define DLYB_CFGR_UNIT_MAX	127
+
+#define DLYB_LNG_TIMEOUT_US	1000
+#define SDMMC_VSWEND_TIMEOUT_US	10000
+
+struct sdmmc_lli_desc {
+	u32 idmalar;
+	u32 idmabase;
+	u32 idmasize;
+};
+
+struct sdmmc_idma {
+	dma_addr_t sg_dma;
+	void *sg_cpu;
+	dma_addr_t bounce_dma_addr;
+	void *bounce_buf;
+	bool use_bounce_buffer;
+};
+
+struct sdmmc_dlyb {
+	void __iomem *base;
+	u32 unit;
+	u32 max;
+};
+
+static int sdmmc_idma_validate_data(struct mmci_host *host,
+				    struct mmc_data *data)
+{
+	struct sdmmc_idma *idma = host->dma_priv;
+	struct device *dev = mmc_dev(host->mmc);
+	struct scatterlist *sg;
+	int i;
+
+	/*
+	 * idma has constraints on idmabase & idmasize for each element
+	 * except the last element, which has no constraint on idmasize
+	 */
+	idma->use_bounce_buffer = false;
+	for_each_sg(data->sg, sg, data->sg_len - 1, i) {
+		if (!IS_ALIGNED(sg->offset, sizeof(u32)) ||
+		    !IS_ALIGNED(sg->length, SDMMC_IDMA_BURST)) {
+			dev_dbg(mmc_dev(host->mmc),
+				"unaligned scatterlist: ofst:%x length:%d\n",
+				data->sg->offset, data->sg->length);
+			goto use_bounce_buffer;
+		}
+	}
+
+	if (!IS_ALIGNED(sg->offset, sizeof(u32))) {
+		dev_dbg(mmc_dev(host->mmc),
+			"unaligned last scatterlist: ofst:%x length:%d\n",
+			data->sg->offset, data->sg->length);
+		goto use_bounce_buffer;
+	}
+
+	return 0;
+
+use_bounce_buffer:
+	if (!idma->bounce_buf) {
+		idma->bounce_buf = dmam_alloc_coherent(dev,
+						       host->mmc->max_req_size,
+						       &idma->bounce_dma_addr,
+						       GFP_KERNEL);
+		if (!idma->bounce_buf) {
+			dev_err(dev, "Unable to allocate DMA bounce buffer.\n");
+			return -ENOMEM;
+		}
+	}
+
+	idma->use_bounce_buffer = true;
+
+	return 0;
+}
+
+static int _sdmmc_idma_prep_data(struct mmci_host *host,
+				 struct mmc_data *data)
+{
+	struct sdmmc_idma *idma = host->dma_priv;
+
+	if (idma->use_bounce_buffer) {
+		if (data->flags & MMC_DATA_WRITE) {
+			unsigned int xfer_bytes = data->blksz * data->blocks;
+
+			sg_copy_to_buffer(data->sg, data->sg_len,
+					  idma->bounce_buf, xfer_bytes);
+			dma_wmb();
+		}
+	} else {
+		int n_elem;
+
+		n_elem = dma_map_sg(mmc_dev(host->mmc),
+				    data->sg,
+				    data->sg_len,
+				    mmc_get_dma_dir(data));
+
+		if (!n_elem) {
+			dev_err(mmc_dev(host->mmc), "dma_map_sg failed\n");
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+
+static int sdmmc_idma_prep_data(struct mmci_host *host,
+				struct mmc_data *data, bool next)
+{
+	/* Check if job is already prepared. */
+	if (!next && data->host_cookie == host->next_cookie)
+		return 0;
+
+	return _sdmmc_idma_prep_data(host, data);
+}
+
+static void sdmmc_idma_unprep_data(struct mmci_host *host,
+				   struct mmc_data *data, int err)
+{
+	struct sdmmc_idma *idma = host->dma_priv;
+
+	if (idma->use_bounce_buffer) {
+		if (data->flags & MMC_DATA_READ) {
+			unsigned int xfer_bytes = data->blksz * data->blocks;
+
+			sg_copy_from_buffer(data->sg, data->sg_len,
+					    idma->bounce_buf, xfer_bytes);
+		}
+	} else {
+		dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+			     mmc_get_dma_dir(data));
+	}
+}
+
+static int sdmmc_idma_setup(struct mmci_host *host)
+{
+	struct sdmmc_idma *idma;
+	struct device *dev = mmc_dev(host->mmc);
+
+	idma = devm_kzalloc(dev, sizeof(*idma), GFP_KERNEL);
+	if (!idma)
+		return -ENOMEM;
+
+	host->dma_priv = idma;
+
+	if (host->variant->dma_lli) {
+		idma->sg_cpu = dmam_alloc_coherent(dev, SDMMC_LLI_BUF_LEN,
+						   &idma->sg_dma, GFP_KERNEL);
+		if (!idma->sg_cpu) {
+			dev_err(dev, "Failed to alloc IDMA descriptor\n");
+			return -ENOMEM;
+		}
+		host->mmc->max_segs = SDMMC_LLI_BUF_LEN /
+			sizeof(struct sdmmc_lli_desc);
+		host->mmc->max_seg_size = host->variant->stm32_idmabsize_mask;
+
+		host->mmc->max_req_size = SZ_1M;
+	} else {
+		host->mmc->max_segs = 1;
+		host->mmc->max_seg_size = host->mmc->max_req_size;
+	}
+
+	return dma_set_max_seg_size(dev, host->mmc->max_seg_size);
+}
+
+static int sdmmc_idma_start(struct mmci_host *host, unsigned int *datactrl)
+{
+	struct sdmmc_idma *idma = host->dma_priv;
+	struct sdmmc_lli_desc *desc = (struct sdmmc_lli_desc *)idma->sg_cpu;
+	struct mmc_data *data = host->data;
+	struct scatterlist *sg;
+	int i;
+
+	if (!host->variant->dma_lli || data->sg_len == 1 ||
+	    idma->use_bounce_buffer) {
+		u32 dma_addr;
+
+		if (idma->use_bounce_buffer)
+			dma_addr = idma->bounce_dma_addr;
+		else
+			dma_addr = sg_dma_address(data->sg);
+
+		writel_relaxed(dma_addr,
+			       host->base + MMCI_STM32_IDMABASE0R);
+		writel_relaxed(MMCI_STM32_IDMAEN,
+			       host->base + MMCI_STM32_IDMACTRLR);
+		return 0;
+	}
+
+	for_each_sg(data->sg, sg, data->sg_len, i) {
+		desc[i].idmalar = (i + 1) * sizeof(struct sdmmc_lli_desc);
+		desc[i].idmalar |= MMCI_STM32_ULA | MMCI_STM32_ULS
+			| MMCI_STM32_ABR;
+		desc[i].idmabase = sg_dma_address(sg);
+		desc[i].idmasize = sg_dma_len(sg);
+	}
+
+	/* mark the end of the link list */
+	desc[data->sg_len - 1].idmalar &= ~MMCI_STM32_ULA;
+
+	dma_wmb();
+	writel_relaxed(idma->sg_dma, host->base + MMCI_STM32_IDMABAR);
+	writel_relaxed(desc[0].idmalar, host->base + MMCI_STM32_IDMALAR);
+	writel_relaxed(desc[0].idmabase, host->base + MMCI_STM32_IDMABASE0R);
+	writel_relaxed(desc[0].idmasize, host->base + MMCI_STM32_IDMABSIZER);
+	writel_relaxed(MMCI_STM32_IDMAEN | MMCI_STM32_IDMALLIEN,
+		       host->base + MMCI_STM32_IDMACTRLR);
+
+	return 0;
+}
+
+static void sdmmc_idma_finalize(struct mmci_host *host, struct mmc_data *data)
+{
+	writel_relaxed(0, host->base + MMCI_STM32_IDMACTRLR);
+
+	if (!data->host_cookie)
+		sdmmc_idma_unprep_data(host, data, 0);
+}
+
+static void mmci_sdmmc_set_clkreg(struct mmci_host *host, unsigned int desired)
+{
+	unsigned int clk = 0, ddr = 0;
+
+	if (host->mmc->ios.timing == MMC_TIMING_MMC_DDR52 ||
+	    host->mmc->ios.timing == MMC_TIMING_UHS_DDR50)
+		ddr = MCI_STM32_CLK_DDR;
+
+	/*
+	 * cclk = mclk / (2 * clkdiv)
+	 * clkdiv 0 => bypass
+	 * in ddr mode bypass is not possible
+	 */
+	if (desired) {
+		if (desired >= host->mclk && !ddr) {
+			host->cclk = host->mclk;
+		} else {
+			clk = DIV_ROUND_UP(host->mclk, 2 * desired);
+			if (clk > MCI_STM32_CLK_CLKDIV_MSK)
+				clk = MCI_STM32_CLK_CLKDIV_MSK;
+			host->cclk = host->mclk / (2 * clk);
+		}
+	} else {
+		/*
+		 * During the power-on phase the clock can't be set to 0;
+		 * only power-off and power-cycle deactivate the clock.
+		 * If the desired clock is 0, set the max divider.
+		 */
+		clk = MCI_STM32_CLK_CLKDIV_MSK;
+		host->cclk = host->mclk / (2 * clk);
+	}
+
+	/* Set actual clock for debug */
+	if (host->mmc->ios.power_mode == MMC_POWER_ON)
+		host->mmc->actual_clock = host->cclk;
+	else
+		host->mmc->actual_clock = 0;
+
+	if (host->mmc->ios.bus_width == MMC_BUS_WIDTH_4)
+		clk |= MCI_STM32_CLK_WIDEBUS_4;
+	if (host->mmc->ios.bus_width == MMC_BUS_WIDTH_8)
+		clk |= MCI_STM32_CLK_WIDEBUS_8;
+
+	clk |= MCI_STM32_CLK_HWFCEN;
+	clk |= host->clk_reg_add;
+	clk |= ddr;
+
+	/*
+	 * SDMMC_FBCK is selected when an external Delay Block is needed
+	 * with SDR104 or HS200.
+	 */
+	if (host->mmc->ios.timing >= MMC_TIMING_UHS_SDR50) {
+		clk |= MCI_STM32_CLK_BUSSPEED;
+		if (host->mmc->ios.timing == MMC_TIMING_UHS_SDR104 ||
+		    host->mmc->ios.timing == MMC_TIMING_MMC_HS200) {
+			clk &= ~MCI_STM32_CLK_SEL_MSK;
+			clk |= MCI_STM32_CLK_SELFBCK;
+		}
+	}
+
+	mmci_write_clkreg(host, clk);
+}
+
+static void sdmmc_dlyb_input_ck(struct sdmmc_dlyb *dlyb)
+{
+	if (!dlyb || !dlyb->base)
+		return;
+
+	/* Output clock = Input clock */
+	writel_relaxed(0, dlyb->base + DLYB_CR);
+}
+
+static void mmci_sdmmc_set_pwrreg(struct mmci_host *host, unsigned int pwr)
+{
+	struct mmc_ios ios = host->mmc->ios;
+	struct sdmmc_dlyb *dlyb = host->variant_priv;
+
+	/* adds OF options */
+	pwr = host->pwr_reg_add;
+
+	sdmmc_dlyb_input_ck(dlyb);
+
+	if (ios.power_mode == MMC_POWER_OFF) {
+		/* Only a reset could power-off sdmmc */
+		reset_control_assert(host->rst);
+		udelay(2);
+		reset_control_deassert(host->rst);
+
+		/*
+		 * Set the SDMMC in Power-cycle state.
+		 * This ensures that SDMMC_D[7:0], SDMMC_CMD and SDMMC_CK
+		 * are driven low, to prevent the card from being supplied
+		 * through the signal lines.
+		 */
+		mmci_write_pwrreg(host, MCI_STM32_PWR_CYC | pwr);
+	} else if (ios.power_mode == MMC_POWER_ON) {
+		/*
+		 * After power-off (reset) the irq mask defined in the probe
+		 * function is lost; the default irq mask (probe) must be
+		 * activated again.
+		 */
+		writel(MCI_IRQENABLE | host->variant->start_err,
+		       host->base + MMCIMASK0);
+
+		/* preserves voltage switch bits */
+		pwr |= host->pwr_reg & (MCI_STM32_VSWITCHEN |
+					MCI_STM32_VSWITCH);
+
+		/*
+		 * After a power-cycle state, we must set the SDMMC in
+		 * Power-off. The SDMMC_D[7:0], SDMMC_CMD and SDMMC_CK are
+		 * driven high. Then we can set the SDMMC to Power-on state.
+		 */
+		mmci_write_pwrreg(host, MCI_PWR_OFF | pwr);
+		mdelay(1);
+		mmci_write_pwrreg(host, MCI_PWR_ON | pwr);
+	}
+}
+
+static u32 sdmmc_get_dctrl_cfg(struct mmci_host *host)
+{
+	u32 datactrl;
+
+	datactrl = mmci_dctrl_blksz(host);
+
+	if (host->mmc->card && mmc_card_sdio(host->mmc->card) &&
+	    host->data->blocks == 1)
+		datactrl |= MCI_DPSM_STM32_MODE_SDIO;
+	else if (host->data->stop && !host->mrq->sbc)
+		datactrl |= MCI_DPSM_STM32_MODE_BLOCK_STOP;
+	else
+		datactrl |= MCI_DPSM_STM32_MODE_BLOCK;
+
+	return datactrl;
+}
+
+static bool sdmmc_busy_complete(struct mmci_host *host, u32 status, u32 err_msk)
+{
+	void __iomem *base = host->base;
+	u32 busy_d0, busy_d0end, mask, sdmmc_status;
+
+	mask = readl_relaxed(base + MMCIMASK0);
+	sdmmc_status = readl_relaxed(base + MMCISTATUS);
+	busy_d0end = sdmmc_status & MCI_STM32_BUSYD0END;
+	busy_d0 = sdmmc_status & MCI_STM32_BUSYD0;
+
+	/* complete if there is an error or busy_d0end */
+	if ((status & err_msk) || busy_d0end)
+		goto complete;
+
+	/*
+	 * On response, the busy signaling is reflected in the BUSYD0 flag.
+	 * If busy_d0 is in-progress we must activate the busyd0end interrupt
+	 * to wait for this completion. Otherwise this request has no busy
+	 * step.
+	 */
+	if (busy_d0) {
+		if (!host->busy_status) {
+			writel_relaxed(mask | host->variant->busy_detect_mask,
+				       base + MMCIMASK0);
+			host->busy_status = status &
+				(MCI_CMDSENT | MCI_CMDRESPEND);
+		}
+		return false;
+	}
+
+complete:
+	if (host->busy_status) {
+		writel_relaxed(mask & ~host->variant->busy_detect_mask,
+			       base + MMCIMASK0);
+		host->busy_status = 0;
+	}
+
+	writel_relaxed(host->variant->busy_detect_mask, base + MMCICLEAR);
+
+	return true;
+}
+
+static void sdmmc_dlyb_set_cfgr(struct sdmmc_dlyb *dlyb,
+				int unit, int phase, bool sampler)
+{
+	u32 cfgr;
+
+	writel_relaxed(DLYB_CR_SEN | DLYB_CR_DEN, dlyb->base + DLYB_CR);
+
+	cfgr = FIELD_PREP(DLYB_CFGR_UNIT_MASK, unit) |
+	       FIELD_PREP(DLYB_CFGR_SEL_MASK, phase);
+	writel_relaxed(cfgr, dlyb->base + DLYB_CFGR);
+
+	if (!sampler)
+		writel_relaxed(DLYB_CR_DEN, dlyb->base + DLYB_CR);
+}
+
+static int sdmmc_dlyb_lng_tuning(struct mmci_host *host)
+{
+	struct sdmmc_dlyb *dlyb = host->variant_priv;
+	u32 cfgr;
+	int i, lng, ret;
+
+	for (i = 0; i <= DLYB_CFGR_UNIT_MAX; i++) {
+		sdmmc_dlyb_set_cfgr(dlyb, i, DLYB_CFGR_SEL_MAX, true);
+
+		ret = readl_relaxed_poll_timeout(dlyb->base + DLYB_CFGR, cfgr,
+						 (cfgr & DLYB_CFGR_LNGF),
+						 1, DLYB_LNG_TIMEOUT_US);
+		if (ret) {
+			dev_warn(mmc_dev(host->mmc),
+				 "delay line cfg timeout unit:%d cfgr:%d\n",
+				 i, cfgr);
+			continue;
+		}
+
+		lng = FIELD_GET(DLYB_CFGR_LNG_MASK, cfgr);
+		if (lng < BIT(DLYB_NB_DELAY) && lng > 0)
+			break;
+	}
+
+	if (i > DLYB_CFGR_UNIT_MAX)
+		return -EINVAL;
+
+	dlyb->unit = i;
+	dlyb->max = __fls(lng);
+
+	return 0;
+}
+
+static int sdmmc_dlyb_phase_tuning(struct mmci_host *host, u32 opcode)
+{
+	struct sdmmc_dlyb *dlyb = host->variant_priv;
+	int cur_len = 0, max_len = 0, end_of_len = 0;
+	int phase;
+
+	for (phase = 0; phase <= dlyb->max; phase++) {
+		sdmmc_dlyb_set_cfgr(dlyb, dlyb->unit, phase, false);
+
+		if (mmc_send_tuning(host->mmc, opcode, NULL)) {
+			cur_len = 0;
+		} else {
+			cur_len++;
+			if (cur_len > max_len) {
+				max_len = cur_len;
+				end_of_len = phase;
+			}
+		}
+	}
+
+	if (!max_len) {
+		dev_err(mmc_dev(host->mmc), "no tuning point found\n");
+		return -EINVAL;
+	}
+
+	writel_relaxed(0, dlyb->base + DLYB_CR);
+
+	/* pick the middle of the longest working phase window */
+	phase = end_of_len - max_len / 2;
+	sdmmc_dlyb_set_cfgr(dlyb, dlyb->unit, phase, false);
+
+	dev_dbg(mmc_dev(host->mmc), "unit:%d max_dly:%d phase:%d\n",
		dlyb->unit, dlyb->max, phase);
+
+	return 0;
+}
+
+static int sdmmc_execute_tuning(struct mmc_host *mmc, u32 opcode)
+{
+	struct mmci_host *host = mmc_priv(mmc);
+	struct sdmmc_dlyb *dlyb = host->variant_priv;
+
+	if (!dlyb || !dlyb->base)
+		return -EINVAL;
+
+	if (sdmmc_dlyb_lng_tuning(host))
+		return -EINVAL;
+
+	return sdmmc_dlyb_phase_tuning(host, opcode);
+}
+
+static void sdmmc_pre_sig_volt_vswitch(struct mmci_host *host)
+{
+	/* clear the voltage switch completion flag */
+	writel_relaxed(MCI_STM32_VSWENDC, host->base + MMCICLEAR);
+	/* enable Voltage switch procedure */
+	mmci_write_pwrreg(host, host->pwr_reg | MCI_STM32_VSWITCHEN);
+}
+
+static int sdmmc_post_sig_volt_switch(struct mmci_host *host,
+				      struct mmc_ios *ios)
+{
+	unsigned long flags;
+	u32 status;
+	int ret = 0;
+
+	spin_lock_irqsave(&host->lock, flags);
+	if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_180 &&
+	    host->pwr_reg & MCI_STM32_VSWITCHEN) {
+		mmci_write_pwrreg(host, host->pwr_reg | MCI_STM32_VSWITCH);
+		spin_unlock_irqrestore(&host->lock, flags);
+
+		/* wait up to 10ms for the voltage switch to complete */
+		ret = readl_relaxed_poll_timeout(host->base + MMCISTATUS,
+						 status,
+						 (status & MCI_STM32_VSWEND),
+						 10, SDMMC_VSWEND_TIMEOUT_US);
+
+		writel_relaxed(MCI_STM32_VSWENDC | MCI_STM32_CKSTOPC,
+			       host->base + MMCICLEAR);
+		spin_lock_irqsave(&host->lock, flags);
+		mmci_write_pwrreg(host, host->pwr_reg &
+				  ~(MCI_STM32_VSWITCHEN | MCI_STM32_VSWITCH));
+	}
+	spin_unlock_irqrestore(&host->lock, flags);
+
+	return ret;
+}
+
+static struct mmci_host_ops sdmmc_variant_ops = {
+	.validate_data = sdmmc_idma_validate_data,
+	.prep_data = sdmmc_idma_prep_data,
+	.unprep_data = sdmmc_idma_unprep_data,
+	.get_datactrl_cfg = sdmmc_get_dctrl_cfg,
+	.dma_setup = sdmmc_idma_setup,
+	.dma_start = sdmmc_idma_start,
+	.dma_finalize = sdmmc_idma_finalize,
+	.set_clkreg = mmci_sdmmc_set_clkreg,
+	.set_pwrreg = mmci_sdmmc_set_pwrreg,
+	.busy_complete = sdmmc_busy_complete,
+	.pre_sig_volt_switch = sdmmc_pre_sig_volt_vswitch,
+	.post_sig_volt_switch = sdmmc_post_sig_volt_switch,
+};
+
+void sdmmc_variant_init(struct mmci_host *host)
+{
+	struct device_node *np = host->mmc->parent->of_node;
+	void __iomem *base_dlyb;
+	struct sdmmc_dlyb *dlyb;
+
+	host->ops = &sdmmc_variant_ops;
+	host->pwr_reg = readl_relaxed(host->base + MMCIPOWER);
+
+	base_dlyb = devm_of_iomap(mmc_dev(host->mmc), np, 1, NULL);
+	if (IS_ERR(base_dlyb))
+		return;
+
+	dlyb = devm_kzalloc(mmc_dev(host->mmc), sizeof(*dlyb), GFP_KERNEL);
+	if (!dlyb)
+		return;
+
+	dlyb->base = base_dlyb;
+	host->variant_priv = dlyb;
+	host->mmc_ops->execute_tuning = sdmmc_execute_tuning;
+}
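
Editor's note: as a worked example of the divider formula in mmci_sdmmc_set_clkreg() above (cclk = mclk / (2 * clkdiv), clkdiv 0 meaning bypass), here is a minimal standalone sketch. The 48 MHz module clock and 400 kHz target are illustrative numbers, not values taken from the driver.

#include <stdio.h>

/* userspace mirror of the kernel macro used by the driver */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int mclk = 48000000;	/* module clock (illustrative) */
	unsigned int desired = 400000;	/* target card clock (illustrative) */

	/* cclk = mclk / (2 * clkdiv); clkdiv 0 => bypass (cclk = mclk) */
	unsigned int clkdiv = DIV_ROUND_UP(mclk, 2 * desired);	/* 60 */
	unsigned int cclk = mclk / (2 * clkdiv);		/* 400000 Hz */

	printf("clkdiv=%u cclk=%u Hz\n", clkdiv, cclk);
	return 0;
}

Rounding the divider up means the resulting cclk never exceeds the requested rate, which is why the driver clamps to MCI_STM32_CLK_CLKDIV_MSK only when the computed divider overflows the register field.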