author		2023-02-21 18:24:12 -0800
committer	2023-02-21 18:24:12 -0800
commit		5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree		cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs.
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control local port range
on socket by socket basis.
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
BPF:
- Add a rbtree data structure following the "next-gen data structure"
precedent set by recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in collect
metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscall as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs.
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races of the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of Physical Layer Collision Avoidance (PLCA)
Reconciliation Sublayer (RS) (802.3cg-2019). Modern version of
shared medium Ethernet.
- Support for MAC Merge layer (IEEE 802.3-2018 clause 99). Allowing
preemption of low priority frames by high priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using nl80211 interface instead.
- Improve the CAN bit timing configuration. Use extack to return
error messages directly to user space, update the SJW handling,
including the definition of a new default value that will benefit
CAN-FD controllers, by increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper phys
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
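
[Editor's illustration, not part of the pull message: the IP_LOCAL_PORT_RANGE socket option mentioned above can be exercised from userspace roughly as sketched below. This assumes the 6.3 uapi layout (a u32 whose low 16 bits hold the lower bound and whose high 16 bits hold the upper bound) and falls back to defining the constant, whose value 51 is taken from include/uapi/linux/in.h, when the libc headers do not yet provide it.]

    /*
     * Hedged sketch of the IP_LOCAL_PORT_RANGE socket option (new in 6.3).
     * The option value is a u32: low 16 bits = lower port bound, high 16
     * bits = upper port bound used for this socket's local port selection.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #ifndef IP_LOCAL_PORT_RANGE
    #define IP_LOCAL_PORT_RANGE 51	/* assumption: value from uapi in.h */
    #endif

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            /* Restrict this socket's ephemeral ports to 60000..60999. */
            uint32_t range = (60999u << 16) | 60000u;

            if (fd < 0) {
                    perror("socket");
                    return 1;
            }

            if (setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
                           &range, sizeof(range)))
                    perror("setsockopt(IP_LOCAL_PORT_RANGE)");

            close(fd);
            return 0;
    }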
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c')
-rw-r--r--	drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c | 1019
1 file changed, 1019 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
new file mode 100644
index 000000000..daa508504
--- /dev/null
+++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
@@ -0,0 +1,1019 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2014 Free Electrons
+ * Copyright (C) 2014 Atmel
+ *
+ * Author: Boris BREZILLON <boris.brezillon@free-electrons.com>
+ */
+
+#include <linux/dmapool.h>
+#include <linux/mfd/atmel-hlcdc.h>
+
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_blend.h>
+#include <drm/drm_fb_dma_helper.h>
+#include <drm/drm_fourcc.h>
+#include <drm/drm_framebuffer.h>
+#include <drm/drm_gem_dma_helper.h>
+
+#include "atmel_hlcdc_dc.h"
+
+/**
+ * struct atmel_hlcdc_plane_state - Atmel HLCDC Plane state structure.
+ *
+ * @base: DRM plane state
+ * @crtc_x: x position of the plane relative to the CRTC
+ * @crtc_y: y position of the plane relative to the CRTC
+ * @crtc_w: visible width of the plane
+ * @crtc_h: visible height of the plane
+ * @src_x: x buffer position
+ * @src_y: y buffer position
+ * @src_w: buffer width
+ * @src_h: buffer height
+ * @disc_x: x discard position
+ * @disc_y: y discard position
+ * @disc_w: discard width
+ * @disc_h: discard height
+ * @ahb_id: AHB identification number
+ * @bpp: bytes per pixel deduced from pixel_format
+ * @offsets: offsets to apply to the GEM buffers
+ * @xstride: value to add to the pixel pointer between each line
+ * @pstride: value to add to the pixel pointer between each pixel
+ * @nplanes: number of planes (deduced from pixel_format)
+ * @dscrs: DMA descriptors
+ */
+struct atmel_hlcdc_plane_state {
+	struct drm_plane_state base;
+	int crtc_x;
+	int crtc_y;
+	unsigned int crtc_w;
+	unsigned int crtc_h;
+	uint32_t src_x;
+	uint32_t src_y;
+	uint32_t src_w;
+	uint32_t src_h;
+
+	int disc_x;
+	int disc_y;
+	int disc_w;
+	int disc_h;
+
+	int ahb_id;
+
+	/* These fields are private and should not be touched */
+	int bpp[ATMEL_HLCDC_LAYER_MAX_PLANES];
+	unsigned int offsets[ATMEL_HLCDC_LAYER_MAX_PLANES];
+	int xstride[ATMEL_HLCDC_LAYER_MAX_PLANES];
+	int pstride[ATMEL_HLCDC_LAYER_MAX_PLANES];
+	int nplanes;
+
+	/* DMA descriptors. */
+	struct atmel_hlcdc_dma_channel_dscr *dscrs[ATMEL_HLCDC_LAYER_MAX_PLANES];
+};
+
+static inline struct atmel_hlcdc_plane_state *
+drm_plane_state_to_atmel_hlcdc_plane_state(struct drm_plane_state *s)
+{
+	return container_of(s, struct atmel_hlcdc_plane_state, base);
+}
+
+#define SUBPIXEL_MASK			0xffff
+
+static uint32_t rgb_formats[] = {
+	DRM_FORMAT_C8,
+	DRM_FORMAT_XRGB4444,
+	DRM_FORMAT_ARGB4444,
+	DRM_FORMAT_RGBA4444,
+	DRM_FORMAT_ARGB1555,
+	DRM_FORMAT_RGB565,
+	DRM_FORMAT_RGB888,
+	DRM_FORMAT_XRGB8888,
+	DRM_FORMAT_ARGB8888,
+	DRM_FORMAT_RGBA8888,
+};
+
+struct atmel_hlcdc_formats atmel_hlcdc_plane_rgb_formats = {
+	.formats = rgb_formats,
+	.nformats = ARRAY_SIZE(rgb_formats),
+};
+
+static uint32_t rgb_and_yuv_formats[] = {
+	DRM_FORMAT_C8,
+	DRM_FORMAT_XRGB4444,
+	DRM_FORMAT_ARGB4444,
+	DRM_FORMAT_RGBA4444,
+	DRM_FORMAT_ARGB1555,
+	DRM_FORMAT_RGB565,
+	DRM_FORMAT_RGB888,
+	DRM_FORMAT_XRGB8888,
+	DRM_FORMAT_ARGB8888,
+	DRM_FORMAT_RGBA8888,
+	DRM_FORMAT_AYUV,
+	DRM_FORMAT_YUYV,
+	DRM_FORMAT_UYVY,
+	DRM_FORMAT_YVYU,
+	DRM_FORMAT_VYUY,
+	DRM_FORMAT_NV21,
+	DRM_FORMAT_NV61,
+	DRM_FORMAT_YUV422,
+	DRM_FORMAT_YUV420,
+};
+
+struct atmel_hlcdc_formats atmel_hlcdc_plane_rgb_and_yuv_formats = {
+	.formats = rgb_and_yuv_formats,
+	.nformats = ARRAY_SIZE(rgb_and_yuv_formats),
+};
+
+static int atmel_hlcdc_format_to_plane_mode(u32 format, u32 *mode)
+{
+	switch (format) {
+	case DRM_FORMAT_C8:
+		*mode = ATMEL_HLCDC_C8_MODE;
+		break;
+	case DRM_FORMAT_XRGB4444:
+		*mode = ATMEL_HLCDC_XRGB4444_MODE;
+		break;
+	case DRM_FORMAT_ARGB4444:
+		*mode = ATMEL_HLCDC_ARGB4444_MODE;
+		break;
+	case DRM_FORMAT_RGBA4444:
+		*mode = ATMEL_HLCDC_RGBA4444_MODE;
+		break;
+	case DRM_FORMAT_RGB565:
+		*mode = ATMEL_HLCDC_RGB565_MODE;
+		break;
+	case DRM_FORMAT_RGB888:
+		*mode = ATMEL_HLCDC_RGB888_MODE;
+		break;
+	case DRM_FORMAT_ARGB1555:
+		*mode = ATMEL_HLCDC_ARGB1555_MODE;
+		break;
+	case DRM_FORMAT_XRGB8888:
+		*mode = ATMEL_HLCDC_XRGB8888_MODE;
+		break;
+	case DRM_FORMAT_ARGB8888:
+		*mode = ATMEL_HLCDC_ARGB8888_MODE;
+		break;
+	case DRM_FORMAT_RGBA8888:
+		*mode = ATMEL_HLCDC_RGBA8888_MODE;
+		break;
+	case DRM_FORMAT_AYUV:
+		*mode = ATMEL_HLCDC_AYUV_MODE;
+		break;
+	case DRM_FORMAT_YUYV:
+		*mode = ATMEL_HLCDC_YUYV_MODE;
+		break;
+	case DRM_FORMAT_UYVY:
+		*mode = ATMEL_HLCDC_UYVY_MODE;
+		break;
+	case DRM_FORMAT_YVYU:
+		*mode = ATMEL_HLCDC_YVYU_MODE;
+		break;
+	case DRM_FORMAT_VYUY:
+		*mode = ATMEL_HLCDC_VYUY_MODE;
+		break;
+	case DRM_FORMAT_NV21:
+		*mode = ATMEL_HLCDC_NV21_MODE;
+		break;
+	case DRM_FORMAT_NV61:
+		*mode = ATMEL_HLCDC_NV61_MODE;
+		break;
+	case DRM_FORMAT_YUV420:
+		*mode = ATMEL_HLCDC_YUV420_MODE;
+		break;
+	case DRM_FORMAT_YUV422:
+		*mode = ATMEL_HLCDC_YUV422_MODE;
+		break;
+	default:
+		return -ENOTSUPP;
+	}
+
+	return 0;
+}
+
+static u32 heo_downscaling_xcoef[] = {
+	0x11343311,
+	0x000000f7,
+	0x1635300c,
+	0x000000f9,
+	0x1b362c08,
+	0x000000fb,
+	0x1f372804,
+	0x000000fe,
+	0x24382400,
+	0x00000000,
+	0x28371ffe,
+	0x00000004,
+	0x2c361bfb,
+	0x00000008,
+	0x303516f9,
+	0x0000000c,
+};
+
+static u32 heo_downscaling_ycoef[] = {
+	0x00123737,
+	0x00173732,
+	0x001b382d,
+	0x001f3928,
+	0x00243824,
+	0x0028391f,
+	0x002d381b,
+	0x00323717,
+};
+
+static u32 heo_upscaling_xcoef[] = {
+	0xf74949f7,
+	0x00000000,
+	0xf55f33fb,
+	0x000000fe,
+	0xf5701efe,
+	0x000000ff,
+	0xf87c0dff,
+	0x00000000,
+	0x00800000,
+	0x00000000,
+	0x0d7cf800,
+	0x000000ff,
+	0x1e70f5ff,
+	0x000000fe,
+	0x335ff5fe,
+	0x000000fb,
+};
+
+static u32 heo_upscaling_ycoef[] = {
+	0x00004040,
+	0x00075920,
+	0x00056f0c,
+	0x00027b03,
+	0x00008000,
+	0x00037b02,
+	0x000c6f05,
+	0x00205907,
+};
+
+#define ATMEL_HLCDC_XPHIDEF	4
+#define ATMEL_HLCDC_YPHIDEF	4
+
+static u32 atmel_hlcdc_plane_phiscaler_get_factor(u32 srcsize,
+						  u32 dstsize,
+						  u32 phidef)
+{
+	u32 factor, max_memsize;
+
+	factor = (256 * ((8 * (srcsize - 1)) - phidef)) / (dstsize - 1);
+	max_memsize = ((factor * (dstsize - 1)) + (256 * phidef)) / 2048;
+
+	if (max_memsize > srcsize - 1)
+		factor--;
+
+	return factor;
+}
+
+static void
+atmel_hlcdc_plane_scaler_set_phicoeff(struct atmel_hlcdc_plane *plane,
+				      const u32 *coeff_tab, int size,
+				      unsigned int cfg_offs)
+{
+	int i;
+
+	for (i = 0; i < size; i++)
+		atmel_hlcdc_layer_write_cfg(&plane->layer, cfg_offs + i,
+					    coeff_tab[i]);
+}
+
+static void atmel_hlcdc_plane_setup_scaler(struct atmel_hlcdc_plane *plane,
+					   struct atmel_hlcdc_plane_state *state)
+{
+	const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
+	u32 xfactor, yfactor;
+
+	if (!desc->layout.scaler_config)
+		return;
+
+	if (state->crtc_w == state->src_w && state->crtc_h == state->src_h) {
+		atmel_hlcdc_layer_write_cfg(&plane->layer,
+					    desc->layout.scaler_config, 0);
+		return;
+	}
+
+	if (desc->layout.phicoeffs.x) {
+		xfactor = atmel_hlcdc_plane_phiscaler_get_factor(state->src_w,
+								 state->crtc_w,
+								 ATMEL_HLCDC_XPHIDEF);
+
+		yfactor = atmel_hlcdc_plane_phiscaler_get_factor(state->src_h,
+								 state->crtc_h,
+								 ATMEL_HLCDC_YPHIDEF);
+
+		atmel_hlcdc_plane_scaler_set_phicoeff(plane,
+				state->crtc_w < state->src_w ?
+				heo_downscaling_xcoef :
+				heo_upscaling_xcoef,
+				ARRAY_SIZE(heo_upscaling_xcoef),
+				desc->layout.phicoeffs.x);
+
+		atmel_hlcdc_plane_scaler_set_phicoeff(plane,
+				state->crtc_h < state->src_h ?
+				heo_downscaling_ycoef :
+				heo_upscaling_ycoef,
+				ARRAY_SIZE(heo_upscaling_ycoef),
+				desc->layout.phicoeffs.y);
+	} else {
+		xfactor = (1024 * state->src_w) / state->crtc_w;
+		yfactor = (1024 * state->src_h) / state->crtc_h;
+	}
+
+	atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.scaler_config,
+				    ATMEL_HLCDC_LAYER_SCALER_ENABLE |
+				    ATMEL_HLCDC_LAYER_SCALER_FACTORS(xfactor,
+								     yfactor));
+}
+
+static void
+atmel_hlcdc_plane_update_pos_and_size(struct atmel_hlcdc_plane *plane,
+				      struct atmel_hlcdc_plane_state *state)
+{
+	const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
+
+	if (desc->layout.size)
+		atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.size,
+					    ATMEL_HLCDC_LAYER_SIZE(state->crtc_w,
+								   state->crtc_h));
+
+	if (desc->layout.memsize)
+		atmel_hlcdc_layer_write_cfg(&plane->layer,
+					    desc->layout.memsize,
+					    ATMEL_HLCDC_LAYER_SIZE(state->src_w,
+								   state->src_h));
+
+	if (desc->layout.pos)
+		atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.pos,
+					    ATMEL_HLCDC_LAYER_POS(state->crtc_x,
+								  state->crtc_y));
+
+	atmel_hlcdc_plane_setup_scaler(plane, state);
+}
+
+static void
+atmel_hlcdc_plane_update_general_settings(struct atmel_hlcdc_plane *plane,
+					  struct atmel_hlcdc_plane_state *state)
+{
+	unsigned int cfg = ATMEL_HLCDC_LAYER_DMA_BLEN_INCR16 | state->ahb_id;
+	const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
+	const struct drm_format_info *format = state->base.fb->format;
+
+	/*
+	 * Rotation optimization is not working on RGB888 (rotation is still
+	 * working but without any optimization).
+	 */
+	if (format->format == DRM_FORMAT_RGB888)
+		cfg |= ATMEL_HLCDC_LAYER_DMA_ROTDIS;
+
+	atmel_hlcdc_layer_write_cfg(&plane->layer, ATMEL_HLCDC_LAYER_DMA_CFG,
+				    cfg);
+
+	cfg = ATMEL_HLCDC_LAYER_DMA | ATMEL_HLCDC_LAYER_REP;
+
+	if (plane->base.type != DRM_PLANE_TYPE_PRIMARY) {
+		cfg |= ATMEL_HLCDC_LAYER_OVR | ATMEL_HLCDC_LAYER_ITER2BL |
+		       ATMEL_HLCDC_LAYER_ITER;
+
+		if (format->has_alpha)
+			cfg |= ATMEL_HLCDC_LAYER_LAEN;
+		else
+			cfg |= ATMEL_HLCDC_LAYER_GAEN |
+			       ATMEL_HLCDC_LAYER_GA(state->base.alpha);
+	}
+
+	if (state->disc_h && state->disc_w)
+		cfg |= ATMEL_HLCDC_LAYER_DISCEN;
+
+	atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.general_config,
+				    cfg);
+}
+
+static void atmel_hlcdc_plane_update_format(struct atmel_hlcdc_plane *plane,
+					    struct atmel_hlcdc_plane_state *state)
+{
+	u32 cfg;
+	int ret;
+
+	ret = atmel_hlcdc_format_to_plane_mode(state->base.fb->format->format,
+					       &cfg);
+	if (ret)
+		return;
+
+	if ((state->base.fb->format->format == DRM_FORMAT_YUV422 ||
+	     state->base.fb->format->format == DRM_FORMAT_NV61) &&
+	    drm_rotation_90_or_270(state->base.rotation))
+		cfg |= ATMEL_HLCDC_YUV422ROT;
+
+	atmel_hlcdc_layer_write_cfg(&plane->layer,
+				    ATMEL_HLCDC_LAYER_FORMAT_CFG, cfg);
+}
+
+static void atmel_hlcdc_plane_update_clut(struct atmel_hlcdc_plane *plane,
+					  struct atmel_hlcdc_plane_state *state)
+{
+	struct drm_crtc *crtc = state->base.crtc;
+	struct drm_color_lut *lut;
+	int idx;
+
+	if (!crtc || !crtc->state)
+		return;
+
+	if (!crtc->state->color_mgmt_changed || !crtc->state->gamma_lut)
+		return;
+
+	lut = (struct drm_color_lut *)crtc->state->gamma_lut->data;
+
+	for (idx = 0; idx < ATMEL_HLCDC_CLUT_SIZE; idx++, lut++) {
+		u32 val = ((lut->red << 8) & 0xff0000) |
+			  (lut->green & 0xff00) |
+			  (lut->blue >> 8);
+
+		atmel_hlcdc_layer_write_clut(&plane->layer, idx, val);
+	}
+}
+
+static void atmel_hlcdc_plane_update_buffers(struct atmel_hlcdc_plane *plane,
+					     struct atmel_hlcdc_plane_state *state)
+{
+	const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
+	struct drm_framebuffer *fb = state->base.fb;
+	u32 sr;
+	int i;
+
+	sr = atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHSR);
+
+	for (i = 0; i < state->nplanes; i++) {
+		struct drm_gem_dma_object *gem = drm_fb_dma_get_gem_obj(fb, i);
+
+		state->dscrs[i]->addr = gem->dma_addr + state->offsets[i];
+
+		atmel_hlcdc_layer_write_reg(&plane->layer,
+					    ATMEL_HLCDC_LAYER_PLANE_HEAD(i),
+					    state->dscrs[i]->self);
+
+		if (!(sr & ATMEL_HLCDC_LAYER_EN)) {
+			atmel_hlcdc_layer_write_reg(&plane->layer,
+					ATMEL_HLCDC_LAYER_PLANE_ADDR(i),
+					state->dscrs[i]->addr);
+			atmel_hlcdc_layer_write_reg(&plane->layer,
+					ATMEL_HLCDC_LAYER_PLANE_CTRL(i),
+					state->dscrs[i]->ctrl);
+			atmel_hlcdc_layer_write_reg(&plane->layer,
+					ATMEL_HLCDC_LAYER_PLANE_NEXT(i),
+					state->dscrs[i]->self);
+		}
+
+		if (desc->layout.xstride[i])
+			atmel_hlcdc_layer_write_cfg(&plane->layer,
+						    desc->layout.xstride[i],
+						    state->xstride[i]);
+
+		if (desc->layout.pstride[i])
+			atmel_hlcdc_layer_write_cfg(&plane->layer,
+						    desc->layout.pstride[i],
+						    state->pstride[i]);
+	}
+}
+
+int atmel_hlcdc_plane_prepare_ahb_routing(struct drm_crtc_state *c_state)
+{
+	unsigned int ahb_load[2] = { };
+	struct drm_plane *plane;
+
+	drm_atomic_crtc_state_for_each_plane(plane, c_state) {
+		struct atmel_hlcdc_plane_state *plane_state;
+		struct drm_plane_state *plane_s;
+		unsigned int pixels, load = 0;
+		int i;
+
+		plane_s = drm_atomic_get_plane_state(c_state->state, plane);
+		if (IS_ERR(plane_s))
+			return PTR_ERR(plane_s);
+
+		plane_state =
+			drm_plane_state_to_atmel_hlcdc_plane_state(plane_s);
+
+		pixels = (plane_state->src_w * plane_state->src_h) -
+			 (plane_state->disc_w * plane_state->disc_h);
+
+		for (i = 0; i < plane_state->nplanes; i++)
+			load += pixels * plane_state->bpp[i];
+
+		if (ahb_load[0] <= ahb_load[1])
+			plane_state->ahb_id = 0;
+		else
+			plane_state->ahb_id = 1;
+
+		ahb_load[plane_state->ahb_id] += load;
+	}
+
+	return 0;
+}
+
+int
+atmel_hlcdc_plane_prepare_disc_area(struct drm_crtc_state *c_state)
+{
+	int disc_x = 0, disc_y = 0, disc_w = 0, disc_h = 0;
+	const struct atmel_hlcdc_layer_cfg_layout *layout;
+	struct atmel_hlcdc_plane_state *primary_state;
+	struct drm_plane_state *primary_s;
+	struct atmel_hlcdc_plane *primary;
+	struct drm_plane *ovl;
+
+	primary = drm_plane_to_atmel_hlcdc_plane(c_state->crtc->primary);
+	layout = &primary->layer.desc->layout;
+	if (!layout->disc_pos || !layout->disc_size)
+		return 0;
+
+	primary_s = drm_atomic_get_plane_state(c_state->state,
+					       &primary->base);
+	if (IS_ERR(primary_s))
+		return PTR_ERR(primary_s);
+
+	primary_state = drm_plane_state_to_atmel_hlcdc_plane_state(primary_s);
+
+	drm_atomic_crtc_state_for_each_plane(ovl, c_state) {
+		struct atmel_hlcdc_plane_state *ovl_state;
+		struct drm_plane_state *ovl_s;
+
+		if (ovl == c_state->crtc->primary)
+			continue;
+
+		ovl_s = drm_atomic_get_plane_state(c_state->state, ovl);
+		if (IS_ERR(ovl_s))
+			return PTR_ERR(ovl_s);
+
+		ovl_state = drm_plane_state_to_atmel_hlcdc_plane_state(ovl_s);
+
+		if (!ovl_s->visible ||
+		    !ovl_s->fb ||
+		    ovl_s->fb->format->has_alpha ||
+		    ovl_s->alpha != DRM_BLEND_ALPHA_OPAQUE)
+			continue;
+
+		/* TODO: implement a smarter hidden area detection */
+		if (ovl_state->crtc_h * ovl_state->crtc_w < disc_h * disc_w)
+			continue;
+
+		disc_x = ovl_state->crtc_x;
+		disc_y = ovl_state->crtc_y;
+		disc_h = ovl_state->crtc_h;
+		disc_w = ovl_state->crtc_w;
+	}
+
+	primary_state->disc_x = disc_x;
+	primary_state->disc_y = disc_y;
+	primary_state->disc_w = disc_w;
+	primary_state->disc_h = disc_h;
+
+	return 0;
+}
+
+static void
+atmel_hlcdc_plane_update_disc_area(struct atmel_hlcdc_plane *plane,
+				   struct atmel_hlcdc_plane_state *state)
+{
+	const struct atmel_hlcdc_layer_cfg_layout *layout;
+
+	layout = &plane->layer.desc->layout;
+	if (!layout->disc_pos || !layout->disc_size)
+		return;
+
+	atmel_hlcdc_layer_write_cfg(&plane->layer, layout->disc_pos,
+				    ATMEL_HLCDC_LAYER_DISC_POS(state->disc_x,
+							       state->disc_y));
+
+	atmel_hlcdc_layer_write_cfg(&plane->layer, layout->disc_size,
+				    ATMEL_HLCDC_LAYER_DISC_SIZE(state->disc_w,
+								state->disc_h));
+}
+
+static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
+					  struct drm_atomic_state *state)
+{
+	struct drm_plane_state *s = drm_atomic_get_new_plane_state(state, p);
+	struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
+	struct atmel_hlcdc_plane_state *hstate =
+			drm_plane_state_to_atmel_hlcdc_plane_state(s);
+	const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
+	struct drm_framebuffer *fb = hstate->base.fb;
+	const struct drm_display_mode *mode;
+	struct drm_crtc_state *crtc_state;
+	int ret;
+	int i;
+
+	if (!hstate->base.crtc || WARN_ON(!fb))
+		return 0;
+
+	crtc_state = drm_atomic_get_existing_crtc_state(state, s->crtc);
+	mode = &crtc_state->adjusted_mode;
+
+	ret = drm_atomic_helper_check_plane_state(s, crtc_state,
+						  (1 << 16) / 2048,
+						  INT_MAX, true, true);
+	if (ret || !s->visible)
+		return ret;
+
+	hstate->src_x = s->src.x1;
+	hstate->src_y = s->src.y1;
+	hstate->src_w = drm_rect_width(&s->src);
+	hstate->src_h = drm_rect_height(&s->src);
+	hstate->crtc_x = s->dst.x1;
+	hstate->crtc_y = s->dst.y1;
+	hstate->crtc_w = drm_rect_width(&s->dst);
+	hstate->crtc_h = drm_rect_height(&s->dst);
+
+	if ((hstate->src_x | hstate->src_y | hstate->src_w | hstate->src_h) &
+	    SUBPIXEL_MASK)
+		return -EINVAL;
+
+	hstate->src_x >>= 16;
+	hstate->src_y >>= 16;
+	hstate->src_w >>= 16;
+	hstate->src_h >>= 16;
+
+	hstate->nplanes = fb->format->num_planes;
+	if (hstate->nplanes > ATMEL_HLCDC_LAYER_MAX_PLANES)
+		return -EINVAL;
+
+	for (i = 0; i < hstate->nplanes; i++) {
+		unsigned int offset = 0;
+		int xdiv = i ? fb->format->hsub : 1;
+		int ydiv = i ? fb->format->vsub : 1;
+
+		hstate->bpp[i] = fb->format->cpp[i];
+		if (!hstate->bpp[i])
+			return -EINVAL;
+
+		switch (hstate->base.rotation & DRM_MODE_ROTATE_MASK) {
+		case DRM_MODE_ROTATE_90:
+			offset = (hstate->src_y / ydiv) *
+				 fb->pitches[i];
+			offset += ((hstate->src_x + hstate->src_w - 1) /
+				   xdiv) * hstate->bpp[i];
+			hstate->xstride[i] = -(((hstate->src_h - 1) / ydiv) *
+					       fb->pitches[i]) -
+					     (2 * hstate->bpp[i]);
+			hstate->pstride[i] = fb->pitches[i] - hstate->bpp[i];
+			break;
+		case DRM_MODE_ROTATE_180:
+			offset = ((hstate->src_y + hstate->src_h - 1) /
+				  ydiv) * fb->pitches[i];
+			offset += ((hstate->src_x + hstate->src_w - 1) /
+				   xdiv) * hstate->bpp[i];
+			hstate->xstride[i] = ((((hstate->src_w - 1) / xdiv) - 1) *
+					      hstate->bpp[i]) - fb->pitches[i];
+			hstate->pstride[i] = -2 * hstate->bpp[i];
+			break;
+		case DRM_MODE_ROTATE_270:
+			offset = ((hstate->src_y + hstate->src_h - 1) /
+				  ydiv) * fb->pitches[i];
+			offset += (hstate->src_x / xdiv) * hstate->bpp[i];
+			hstate->xstride[i] = ((hstate->src_h - 1) / ydiv) *
+					     fb->pitches[i];
+			hstate->pstride[i] = -fb->pitches[i] - hstate->bpp[i];
+			break;
+		case DRM_MODE_ROTATE_0:
+		default:
+			offset = (hstate->src_y / ydiv) * fb->pitches[i];
+			offset += (hstate->src_x / xdiv) * hstate->bpp[i];
+			hstate->xstride[i] = fb->pitches[i] -
+					     ((hstate->src_w / xdiv) *
+					      hstate->bpp[i]);
+			hstate->pstride[i] = 0;
+			break;
+		}
+
+		hstate->offsets[i] = offset + fb->offsets[i];
+	}
+
+	/*
+	 * Swap width and size in case of 90 or 270 degrees rotation
+	 */
+	if (drm_rotation_90_or_270(hstate->base.rotation)) {
+		swap(hstate->src_w, hstate->src_h);
+	}
+
+	if (!desc->layout.size &&
+	    (mode->hdisplay != hstate->crtc_w ||
+	     mode->vdisplay != hstate->crtc_h))
+		return -EINVAL;
+
+	if ((hstate->crtc_h != hstate->src_h || hstate->crtc_w != hstate->src_w) &&
+	    (!desc->layout.memsize ||
+	     hstate->base.fb->format->has_alpha))
+		return -EINVAL;
+
+	return 0;
+}
+
+static void atmel_hlcdc_plane_atomic_disable(struct drm_plane *p,
+					     struct drm_atomic_state *state)
+{
+	struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
+
+	/* Disable interrupts */
+	atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_IDR,
+				    0xffffffff);
+
+	/* Disable the layer */
+	atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHDR,
+				    ATMEL_HLCDC_LAYER_RST |
+				    ATMEL_HLCDC_LAYER_A2Q |
+				    ATMEL_HLCDC_LAYER_UPDATE);
+
+	/* Clear all pending interrupts */
+	atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_ISR);
+}
+
+static void atmel_hlcdc_plane_atomic_update(struct drm_plane *p,
+					    struct drm_atomic_state *state)
+{
+	struct drm_plane_state *new_s = drm_atomic_get_new_plane_state(state,
+								       p);
+	struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
+	struct atmel_hlcdc_plane_state *hstate =
+			drm_plane_state_to_atmel_hlcdc_plane_state(new_s);
+	u32 sr;
+
+	if (!new_s->crtc || !new_s->fb)
+		return;
+
+	if (!hstate->base.visible) {
+		atmel_hlcdc_plane_atomic_disable(p, state);
+		return;
+	}
+
+	atmel_hlcdc_plane_update_pos_and_size(plane, hstate);
+	atmel_hlcdc_plane_update_general_settings(plane, hstate);
+	atmel_hlcdc_plane_update_format(plane, hstate);
+	atmel_hlcdc_plane_update_clut(plane, hstate);
+	atmel_hlcdc_plane_update_buffers(plane, hstate);
+	atmel_hlcdc_plane_update_disc_area(plane, hstate);
+
+	/* Enable the overrun interrupts. */
+	atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_IER,
+				    ATMEL_HLCDC_LAYER_OVR_IRQ(0) |
+				    ATMEL_HLCDC_LAYER_OVR_IRQ(1) |
+				    ATMEL_HLCDC_LAYER_OVR_IRQ(2));
+
+	/* Apply the new config at the next SOF event. */
+	sr = atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHSR);
+	atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHER,
+				    ATMEL_HLCDC_LAYER_UPDATE |
+				    (sr & ATMEL_HLCDC_LAYER_EN ?
+				     ATMEL_HLCDC_LAYER_A2Q : ATMEL_HLCDC_LAYER_EN));
+}
+
+static int atmel_hlcdc_plane_init_properties(struct atmel_hlcdc_plane *plane)
+{
+	const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
+
+	if (desc->type == ATMEL_HLCDC_OVERLAY_LAYER ||
+	    desc->type == ATMEL_HLCDC_CURSOR_LAYER) {
+		int ret;
+
+		ret = drm_plane_create_alpha_property(&plane->base);
+		if (ret)
+			return ret;
+	}
+
+	if (desc->layout.xstride[0] && desc->layout.pstride[0]) {
+		int ret;
+
+		ret = drm_plane_create_rotation_property(&plane->base,
+							 DRM_MODE_ROTATE_0,
+							 DRM_MODE_ROTATE_0 |
+							 DRM_MODE_ROTATE_90 |
+							 DRM_MODE_ROTATE_180 |
+							 DRM_MODE_ROTATE_270);
+		if (ret)
+			return ret;
+	}
+
+	if (desc->layout.csc) {
+		/*
+		 * TODO: decare a "yuv-to-rgb-conv-factors" property to let
+		 * userspace modify these factors (using a BLOB property ?).
+		 */
+		atmel_hlcdc_layer_write_cfg(&plane->layer,
+					    desc->layout.csc,
+					    0x4c900091);
+		atmel_hlcdc_layer_write_cfg(&plane->layer,
+					    desc->layout.csc + 1,
+					    0x7a5f5090);
+		atmel_hlcdc_layer_write_cfg(&plane->layer,
+					    desc->layout.csc + 2,
+					    0x40040890);
+	}
+
+	return 0;
+}
+
+void atmel_hlcdc_plane_irq(struct atmel_hlcdc_plane *plane)
+{
+	const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
+	u32 isr;
+
+	isr = atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_ISR);
+
+	/*
+	 * There's not much we can do in case of overrun except informing
+	 * the user. However, we are in interrupt context here, hence the
+	 * use of dev_dbg().
+	 */
+	if (isr &
+	    (ATMEL_HLCDC_LAYER_OVR_IRQ(0) | ATMEL_HLCDC_LAYER_OVR_IRQ(1) |
+	     ATMEL_HLCDC_LAYER_OVR_IRQ(2)))
+		dev_dbg(plane->base.dev->dev, "overrun on plane %s\n",
+			desc->name);
+}
+
+static const struct drm_plane_helper_funcs atmel_hlcdc_layer_plane_helper_funcs = {
+	.atomic_check = atmel_hlcdc_plane_atomic_check,
+	.atomic_update = atmel_hlcdc_plane_atomic_update,
+	.atomic_disable = atmel_hlcdc_plane_atomic_disable,
+};
+
+static int atmel_hlcdc_plane_alloc_dscrs(struct drm_plane *p,
+					 struct atmel_hlcdc_plane_state *state)
+{
+	struct atmel_hlcdc_dc *dc = p->dev->dev_private;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(state->dscrs); i++) {
+		struct atmel_hlcdc_dma_channel_dscr *dscr;
+		dma_addr_t dscr_dma;
+
+		dscr = dma_pool_alloc(dc->dscrpool, GFP_KERNEL, &dscr_dma);
+		if (!dscr)
+			goto err;
+
+		dscr->addr = 0;
+		dscr->next = dscr_dma;
+		dscr->self = dscr_dma;
+		dscr->ctrl = ATMEL_HLCDC_LAYER_DFETCH;
+
+		state->dscrs[i] = dscr;
+	}
+
+	return 0;
+
+err:
+	for (i--; i >= 0; i--) {
+		dma_pool_free(dc->dscrpool, state->dscrs[i],
+			      state->dscrs[i]->self);
+	}
+
+	return -ENOMEM;
+}
+
+static void atmel_hlcdc_plane_reset(struct drm_plane *p)
+{
+	struct atmel_hlcdc_plane_state *state;
+
+	if (p->state) {
+		state = drm_plane_state_to_atmel_hlcdc_plane_state(p->state);
+
+		if (state->base.fb)
+			drm_framebuffer_put(state->base.fb);
+
+		kfree(state);
+		p->state = NULL;
+	}
+
+	state = kzalloc(sizeof(*state), GFP_KERNEL);
+	if (state) {
+		if (atmel_hlcdc_plane_alloc_dscrs(p, state)) {
+			kfree(state);
+			dev_err(p->dev->dev,
+				"Failed to allocate initial plane state\n");
+			return;
+		}
+		__drm_atomic_helper_plane_reset(p, &state->base);
+	}
+}
+
+static struct drm_plane_state *
+atmel_hlcdc_plane_atomic_duplicate_state(struct drm_plane *p)
+{
+	struct atmel_hlcdc_plane_state *state =
+			drm_plane_state_to_atmel_hlcdc_plane_state(p->state);
+	struct atmel_hlcdc_plane_state *copy;
+
+	copy = kmemdup(state, sizeof(*state), GFP_KERNEL);
+	if (!copy)
+		return NULL;
+
+	if (atmel_hlcdc_plane_alloc_dscrs(p, copy)) {
+		kfree(copy);
+		return NULL;
+	}
+
+	if (copy->base.fb)
+		drm_framebuffer_get(copy->base.fb);
+
+	return &copy->base;
+}
+
+static void atmel_hlcdc_plane_atomic_destroy_state(struct drm_plane *p,
+						   struct drm_plane_state *s)
+{
+	struct atmel_hlcdc_plane_state *state =
+			drm_plane_state_to_atmel_hlcdc_plane_state(s);
+	struct atmel_hlcdc_dc *dc = p->dev->dev_private;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(state->dscrs); i++) {
+		dma_pool_free(dc->dscrpool, state->dscrs[i],
+			      state->dscrs[i]->self);
+	}
+
+	if (s->fb)
+		drm_framebuffer_put(s->fb);
+
+	kfree(state);
+}
+
+static const struct drm_plane_funcs layer_plane_funcs = {
+	.update_plane = drm_atomic_helper_update_plane,
+	.disable_plane = drm_atomic_helper_disable_plane,
+	.destroy = drm_plane_cleanup,
+	.reset = atmel_hlcdc_plane_reset,
+	.atomic_duplicate_state = atmel_hlcdc_plane_atomic_duplicate_state,
+	.atomic_destroy_state = atmel_hlcdc_plane_atomic_destroy_state,
+};
+
+static int atmel_hlcdc_plane_create(struct drm_device *dev,
+				    const struct atmel_hlcdc_layer_desc *desc)
+{
+	struct atmel_hlcdc_dc *dc = dev->dev_private;
+	struct atmel_hlcdc_plane *plane;
+	enum drm_plane_type type;
+	int ret;
+
+	plane = devm_kzalloc(dev->dev, sizeof(*plane), GFP_KERNEL);
+	if (!plane)
+		return -ENOMEM;
+
+	atmel_hlcdc_layer_init(&plane->layer, desc, dc->hlcdc->regmap);
+
+	if (desc->type == ATMEL_HLCDC_BASE_LAYER)
+		type = DRM_PLANE_TYPE_PRIMARY;
+	else if (desc->type == ATMEL_HLCDC_CURSOR_LAYER)
+		type = DRM_PLANE_TYPE_CURSOR;
+	else
+		type = DRM_PLANE_TYPE_OVERLAY;
+
+	ret = drm_universal_plane_init(dev, &plane->base, 0,
+				       &layer_plane_funcs,
+				       desc->formats->formats,
+				       desc->formats->nformats,
+				       NULL, type, NULL);
+	if (ret)
+		return ret;
+
+	drm_plane_helper_add(&plane->base,
+			     &atmel_hlcdc_layer_plane_helper_funcs);
+
+	/* Set default property values*/
+	ret = atmel_hlcdc_plane_init_properties(plane);
+	if (ret)
+		return ret;
+
+	dc->layers[desc->id] = &plane->layer;
+
+	return 0;
+}
+
+int atmel_hlcdc_create_planes(struct drm_device *dev)
+{
+	struct atmel_hlcdc_dc *dc = dev->dev_private;
+	const struct atmel_hlcdc_layer_desc *descs = dc->desc->layers;
+	int nlayers = dc->desc->nlayers;
+	int i, ret;
+
+	dc->dscrpool = dmam_pool_create("atmel-hlcdc-dscr", dev->dev,
+				sizeof(struct atmel_hlcdc_dma_channel_dscr),
+				sizeof(u64), 0);
+	if (!dc->dscrpool)
+		return -ENOMEM;
+
+	for (i = 0; i < nlayers; i++) {
+		if (descs[i].type != ATMEL_HLCDC_BASE_LAYER &&
+		    descs[i].type != ATMEL_HLCDC_OVERLAY_LAYER &&
+		    descs[i].type != ATMEL_HLCDC_CURSOR_LAYER)
+			continue;
+
+		ret = atmel_hlcdc_plane_create(dev, &descs[i]);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
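
[Editor's note: to make the Phi scaler arithmetic in atmel_hlcdc_plane_setup_scaler() and atmel_hlcdc_plane_phiscaler_get_factor() above easier to follow, here is a small host-side sketch that mirrors the same integer computation for two illustrative geometries. The sizes used are examples only, not values taken from the driver.]

    /*
     * Host-side sketch mirroring atmel_hlcdc_plane_phiscaler_get_factor()
     * from the diff above. It shows how the scaler factor is derived from
     * the source/destination sizes and the default phase value
     * (ATMEL_HLCDC_XPHIDEF / ATMEL_HLCDC_YPHIDEF == 4).
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t phiscaler_get_factor(uint32_t srcsize, uint32_t dstsize,
                                         uint32_t phidef)
    {
            uint32_t factor, max_memsize;

            /* Same integer arithmetic as the driver. */
            factor = (256 * ((8 * (srcsize - 1)) - phidef)) / (dstsize - 1);
            max_memsize = ((factor * (dstsize - 1)) + (256 * phidef)) / 2048;

            /* Back off by one step if the scaler would read past the source. */
            if (max_memsize > srcsize - 1)
                    factor--;

            return factor;
    }

    int main(void)
    {
            printf("upscale   640 -> 1280: factor %u\n",
                   phiscaler_get_factor(640, 1280, 4));
            printf("downscale 1280 -> 640: factor %u\n",
                   phiscaler_get_factor(1280, 640, 4));
            return 0;
    }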