author | Linus Torvalds <torvalds@linux-foundation.org> | 2023-02-21 18:24:12 -0800 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2023-02-21 18:24:12 -0800 |
commit | 5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch) | |
tree | cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /drivers/gpu/drm/xen/xen_drm_front.c | |
download | linux-5b7c4cabbb65f5c469464da6c5f614cbd7f730f2.tar.gz linux-5b7c4cabbb65f5c469464da6c5f614cbd7f730f2.zip |
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs
(see the C sketch after this list).
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
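For the RPS sysctl item above, a minimal user-space sketch, assuming
the /proc/sys/net/core/rps_default_mask path introduced by this
series (requires root; illustrative only):

      /* Set a default RPS CPU mask for netdevs created afterwards. */
      #include <stdio.h>

      int main(void)
      {
              FILE *f = fopen("/proc/sys/net/core/rps_default_mask", "w");

              if (!f)
                      return 1;
              /* Hex CPU bitmap, same format as sysfs rps_cpus; CPUs 0-3 here. */
              fprintf(f, "f\n");
              return fclose(f) ? 1 : 0;
      }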
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port
range on a socket-by-socket basis (see the sketch after this list).
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
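For the IP_LOCAL_PORT_RANGE item above, the bounds are packed into a
single 32-bit value (low port in the lower 16 bits, high port in the
upper 16 bits, to the best of my reading of the series); a hedged
sketch under that assumption:

      #include <stdint.h>
      #include <netinet/in.h>
      #include <sys/socket.h>

      /* From the 6.3 uAPI (linux/in.h); defined here in case libc
       * headers predate it. */
      #ifndef IP_LOCAL_PORT_RANGE
      #define IP_LOCAL_PORT_RANGE 51
      #endif

      static int set_local_port_range(int fd, uint16_t lo, uint16_t hi)
      {
              uint32_t range = ((uint32_t)hi << 16) | lo; /* assumed encoding */

              return setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
                                &range, sizeof(range));
      }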
BPF:
- Add an rbtree data structure following the "next-gen data structure"
precedent set by the recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in collect
metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscall as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs (see the
sketch after this list).
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
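For the __bpf_kfunc item above, a minimal kernel-side sketch; the
function and set names are hypothetical, and registration uses the
existing btf_kfunc_id_set machinery:

      /* Hypothetical example; bpf_demo_add is not an upstream kfunc. */
      #include <linux/bpf.h>
      #include <linux/btf.h>
      #include <linux/btf_ids.h>

      __bpf_kfunc u64 bpf_demo_add(u64 a, u64 b)
      {
              return a + b;
      }

      BTF_SET8_START(demo_kfunc_ids)
      BTF_ID_FLAGS(func, bpf_demo_add)
      BTF_SET8_END(demo_kfunc_ids)

      static const struct btf_kfunc_id_set demo_kfunc_set = {
              .owner = THIS_MODULE,
              .set   = &demo_kfunc_ids,
      };

      /* From module init, e.g.:
       * err = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
       *                                 &demo_kfunc_set);
       */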
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races of the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of Physical Layer Collision Avoidance (PLCA)
Reconciliation Sublayer (RS) (802.3cg-2019). Modern version of
shared medium Ethernet.
- Support for MAC Merge layer (IEEE 802.3-2018 clause 99). Allowing
preemption of low priority frames by high priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using nl80211 interface instead.
- Improve the CAN bit timing configuration. Use extack to return
error messages directly to user space, update the SJW handling,
including the definition of a new default value that will benefit
CAN-FD controllers, by increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper PHYs
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'drivers/gpu/drm/xen/xen_drm_front.c')
-rw-r--r-- | drivers/gpu/drm/xen/xen_drm_front.c | 799 |
1 file changed, 799 insertions, 0 deletions
diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
new file mode 100644
index 000000000..90996c108
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -0,0 +1,799 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+/*
+ * Xen para-virtual DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_drv.h>
+#include <drm/drm_ioctl.h>
+#include <drm/drm_probe_helper.h>
+#include <drm/drm_file.h>
+#include <drm/drm_gem.h>
+
+#include <xen/platform_pci.h>
+#include <xen/xen.h>
+#include <xen/xenbus.h>
+
+#include <xen/xen-front-pgdir-shbuf.h>
+#include <xen/interface/io/displif.h>
+
+#include "xen_drm_front.h"
+#include "xen_drm_front_cfg.h"
+#include "xen_drm_front_evtchnl.h"
+#include "xen_drm_front_gem.h"
+#include "xen_drm_front_kms.h"
+
+struct xen_drm_front_dbuf {
+	struct list_head list;
+	u64 dbuf_cookie;
+	u64 fb_cookie;
+
+	struct xen_front_pgdir_shbuf shbuf;
+};
+
+static void dbuf_add_to_list(struct xen_drm_front_info *front_info,
+			     struct xen_drm_front_dbuf *dbuf, u64 dbuf_cookie)
+{
+	dbuf->dbuf_cookie = dbuf_cookie;
+	list_add(&dbuf->list, &front_info->dbuf_list);
+}
+
+static struct xen_drm_front_dbuf *dbuf_get(struct list_head *dbuf_list,
+					   u64 dbuf_cookie)
+{
+	struct xen_drm_front_dbuf *buf, *q;
+
+	list_for_each_entry_safe(buf, q, dbuf_list, list)
+		if (buf->dbuf_cookie == dbuf_cookie)
+			return buf;
+
+	return NULL;
+}
+
+static void dbuf_free(struct list_head *dbuf_list, u64 dbuf_cookie)
+{
+	struct xen_drm_front_dbuf *buf, *q;
+
+	list_for_each_entry_safe(buf, q, dbuf_list, list)
+		if (buf->dbuf_cookie == dbuf_cookie) {
+			list_del(&buf->list);
+			xen_front_pgdir_shbuf_unmap(&buf->shbuf);
+			xen_front_pgdir_shbuf_free(&buf->shbuf);
+			kfree(buf);
+			break;
+		}
+}
+
+static void dbuf_free_all(struct list_head *dbuf_list)
+{
+	struct xen_drm_front_dbuf *buf, *q;
+
+	list_for_each_entry_safe(buf, q, dbuf_list, list) {
+		list_del(&buf->list);
+		xen_front_pgdir_shbuf_unmap(&buf->shbuf);
+		xen_front_pgdir_shbuf_free(&buf->shbuf);
+		kfree(buf);
+	}
+}
+
+static struct xendispl_req *
+be_prepare_req(struct xen_drm_front_evtchnl *evtchnl, u8 operation)
+{
+	struct xendispl_req *req;
+
+	req = RING_GET_REQUEST(&evtchnl->u.req.ring,
+			       evtchnl->u.req.ring.req_prod_pvt);
+	req->operation = operation;
+	req->id = evtchnl->evt_next_id++;
+	evtchnl->evt_id = req->id;
+	return req;
+}
+
+static int be_stream_do_io(struct xen_drm_front_evtchnl *evtchnl,
+			   struct xendispl_req *req)
+{
+	reinit_completion(&evtchnl->u.req.completion);
+	if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
+		return -EIO;
+
+	xen_drm_front_evtchnl_flush(evtchnl);
+	return 0;
+}
+
+static int be_stream_wait_io(struct xen_drm_front_evtchnl *evtchnl)
+{
+	if (wait_for_completion_timeout(&evtchnl->u.req.completion,
+			msecs_to_jiffies(XEN_DRM_FRONT_WAIT_BACK_MS)) <= 0)
+		return -ETIMEDOUT;
+
+	return evtchnl->u.req.resp_status;
+}
+
+int xen_drm_front_mode_set(struct xen_drm_front_drm_pipeline *pipeline,
+			   u32 x, u32 y, u32 width, u32 height,
+			   u32 bpp, u64 fb_cookie)
+{
+	struct xen_drm_front_evtchnl *evtchnl;
+	struct xen_drm_front_info *front_info;
+	struct xendispl_req *req;
+	unsigned long flags;
+	int ret;
+
+	front_info = pipeline->drm_info->front_info;
+	evtchnl = &front_info->evt_pairs[pipeline->index].req;
+	if (unlikely(!evtchnl))
+		return -EIO;
+
+	mutex_lock(&evtchnl->u.req.req_io_lock);
+
+	spin_lock_irqsave(&front_info->io_lock, flags);
+	req = be_prepare_req(evtchnl, XENDISPL_OP_SET_CONFIG);
+	req->op.set_config.x = x;
+	req->op.set_config.y = y;
+	req->op.set_config.width = width;
+	req->op.set_config.height = height;
+	req->op.set_config.bpp = bpp;
+	req->op.set_config.fb_cookie = fb_cookie;
+
+	ret = be_stream_do_io(evtchnl, req);
+	spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+	if (ret == 0)
+		ret = be_stream_wait_io(evtchnl);
+
+	mutex_unlock(&evtchnl->u.req.req_io_lock);
+	return ret;
+}
+
+int xen_drm_front_dbuf_create(struct xen_drm_front_info *front_info,
+			      u64 dbuf_cookie, u32 width, u32 height,
+			      u32 bpp, u64 size, u32 offset,
+			      struct page **pages)
+{
+	struct xen_drm_front_evtchnl *evtchnl;
+	struct xen_drm_front_dbuf *dbuf;
+	struct xendispl_req *req;
+	struct xen_front_pgdir_shbuf_cfg buf_cfg;
+	unsigned long flags;
+	int ret;
+
+	evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
+	if (unlikely(!evtchnl))
+		return -EIO;
+
+	dbuf = kzalloc(sizeof(*dbuf), GFP_KERNEL);
+	if (!dbuf)
+		return -ENOMEM;
+
+	dbuf_add_to_list(front_info, dbuf, dbuf_cookie);
+
+	memset(&buf_cfg, 0, sizeof(buf_cfg));
+	buf_cfg.xb_dev = front_info->xb_dev;
+	buf_cfg.num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
+	buf_cfg.pages = pages;
+	buf_cfg.pgdir = &dbuf->shbuf;
+	buf_cfg.be_alloc = front_info->cfg.be_alloc;
+
+	ret = xen_front_pgdir_shbuf_alloc(&buf_cfg);
+	if (ret < 0)
+		goto fail_shbuf_alloc;
+
+	mutex_lock(&evtchnl->u.req.req_io_lock);
+
+	spin_lock_irqsave(&front_info->io_lock, flags);
+	req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_CREATE);
+	req->op.dbuf_create.gref_directory =
+			xen_front_pgdir_shbuf_get_dir_start(&dbuf->shbuf);
+	req->op.dbuf_create.buffer_sz = size;
+	req->op.dbuf_create.data_ofs = offset;
+	req->op.dbuf_create.dbuf_cookie = dbuf_cookie;
+	req->op.dbuf_create.width = width;
+	req->op.dbuf_create.height = height;
+	req->op.dbuf_create.bpp = bpp;
+	if (buf_cfg.be_alloc)
+		req->op.dbuf_create.flags |= XENDISPL_DBUF_FLG_REQ_ALLOC;
+
+	ret = be_stream_do_io(evtchnl, req);
+	spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+	if (ret < 0)
+		goto fail;
+
+	ret = be_stream_wait_io(evtchnl);
+	if (ret < 0)
+		goto fail;
+
+	ret = xen_front_pgdir_shbuf_map(&dbuf->shbuf);
+	if (ret < 0)
+		goto fail;
+
+	mutex_unlock(&evtchnl->u.req.req_io_lock);
+	return 0;
+
+fail:
+	mutex_unlock(&evtchnl->u.req.req_io_lock);
+fail_shbuf_alloc:
+	dbuf_free(&front_info->dbuf_list, dbuf_cookie);
+	return ret;
+}
+
+static int xen_drm_front_dbuf_destroy(struct xen_drm_front_info *front_info,
+				      u64 dbuf_cookie)
+{
+	struct xen_drm_front_evtchnl *evtchnl;
+	struct xendispl_req *req;
+	unsigned long flags;
+	bool be_alloc;
+	int ret;
+
+	evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
+	if (unlikely(!evtchnl))
+		return -EIO;
+
+	be_alloc = front_info->cfg.be_alloc;
+
+	/*
+	 * For the backend allocated buffer release references now, so backend
+	 * can free the buffer.
+	 */
+	if (be_alloc)
+		dbuf_free(&front_info->dbuf_list, dbuf_cookie);
+
+	mutex_lock(&evtchnl->u.req.req_io_lock);
+
+	spin_lock_irqsave(&front_info->io_lock, flags);
+	req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_DESTROY);
+	req->op.dbuf_destroy.dbuf_cookie = dbuf_cookie;
+
+	ret = be_stream_do_io(evtchnl, req);
+	spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+	if (ret == 0)
+		ret = be_stream_wait_io(evtchnl);
+
+	/*
+	 * Do this regardless of communication status with the backend:
+	 * if we cannot remove remote resources remove what we can locally.
+	 */
+	if (!be_alloc)
+		dbuf_free(&front_info->dbuf_list, dbuf_cookie);
+
+	mutex_unlock(&evtchnl->u.req.req_io_lock);
+	return ret;
+}
+
+int xen_drm_front_fb_attach(struct xen_drm_front_info *front_info,
+			    u64 dbuf_cookie, u64 fb_cookie, u32 width,
+			    u32 height, u32 pixel_format)
+{
+	struct xen_drm_front_evtchnl *evtchnl;
+	struct xen_drm_front_dbuf *buf;
+	struct xendispl_req *req;
+	unsigned long flags;
+	int ret;
+
+	evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
+	if (unlikely(!evtchnl))
+		return -EIO;
+
+	buf = dbuf_get(&front_info->dbuf_list, dbuf_cookie);
+	if (!buf)
+		return -EINVAL;
+
+	buf->fb_cookie = fb_cookie;
+
+	mutex_lock(&evtchnl->u.req.req_io_lock);
+
+	spin_lock_irqsave(&front_info->io_lock, flags);
+	req = be_prepare_req(evtchnl, XENDISPL_OP_FB_ATTACH);
+	req->op.fb_attach.dbuf_cookie = dbuf_cookie;
+	req->op.fb_attach.fb_cookie = fb_cookie;
+	req->op.fb_attach.width = width;
+	req->op.fb_attach.height = height;
+	req->op.fb_attach.pixel_format = pixel_format;
+
+	ret = be_stream_do_io(evtchnl, req);
+	spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+	if (ret == 0)
+		ret = be_stream_wait_io(evtchnl);
+
+	mutex_unlock(&evtchnl->u.req.req_io_lock);
+	return ret;
+}
+
+int xen_drm_front_fb_detach(struct xen_drm_front_info *front_info,
+			    u64 fb_cookie)
+{
+	struct xen_drm_front_evtchnl *evtchnl;
+	struct xendispl_req *req;
+	unsigned long flags;
+	int ret;
+
+	evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
+	if (unlikely(!evtchnl))
+		return -EIO;
+
+	mutex_lock(&evtchnl->u.req.req_io_lock);
+
+	spin_lock_irqsave(&front_info->io_lock, flags);
+	req = be_prepare_req(evtchnl, XENDISPL_OP_FB_DETACH);
+	req->op.fb_detach.fb_cookie = fb_cookie;
+
+	ret = be_stream_do_io(evtchnl, req);
+	spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+	if (ret == 0)
+		ret = be_stream_wait_io(evtchnl);
+
+	mutex_unlock(&evtchnl->u.req.req_io_lock);
+	return ret;
+}
+
+int xen_drm_front_page_flip(struct xen_drm_front_info *front_info,
+			    int conn_idx, u64 fb_cookie)
+{
+	struct xen_drm_front_evtchnl *evtchnl;
+	struct xendispl_req *req;
+	unsigned long flags;
+	int ret;
+
+	if (unlikely(conn_idx >= front_info->num_evt_pairs))
+		return -EINVAL;
+
+	evtchnl = &front_info->evt_pairs[conn_idx].req;
+
+	mutex_lock(&evtchnl->u.req.req_io_lock);
+
+	spin_lock_irqsave(&front_info->io_lock, flags);
+	req = be_prepare_req(evtchnl, XENDISPL_OP_PG_FLIP);
+	req->op.pg_flip.fb_cookie = fb_cookie;
+
+	ret = be_stream_do_io(evtchnl, req);
+	spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+	if (ret == 0)
+		ret = be_stream_wait_io(evtchnl);
+
+	mutex_unlock(&evtchnl->u.req.req_io_lock);
+	return ret;
+}
+
+void xen_drm_front_on_frame_done(struct xen_drm_front_info *front_info,
+				 int conn_idx, u64 fb_cookie)
+{
+	struct xen_drm_front_drm_info *drm_info = front_info->drm_info;
+
+	if (unlikely(conn_idx >= front_info->cfg.num_connectors))
+		return;
+
+	xen_drm_front_kms_on_frame_done(&drm_info->pipeline[conn_idx],
+					fb_cookie);
+}
+
+void xen_drm_front_gem_object_free(struct drm_gem_object *obj)
+{
+	struct xen_drm_front_drm_info *drm_info = obj->dev->dev_private;
+	int idx;
+
+	if (drm_dev_enter(obj->dev, &idx)) {
+		xen_drm_front_dbuf_destroy(drm_info->front_info,
+					   xen_drm_front_dbuf_to_cookie(obj));
+		drm_dev_exit(idx);
+	} else {
+		dbuf_free(&drm_info->front_info->dbuf_list,
+			  xen_drm_front_dbuf_to_cookie(obj));
+	}
+
+	xen_drm_front_gem_free_object_unlocked(obj);
+}
+
+static int xen_drm_drv_dumb_create(struct drm_file *filp,
+				   struct drm_device *dev,
+				   struct drm_mode_create_dumb *args)
+{
+	struct xen_drm_front_drm_info *drm_info = dev->dev_private;
+	struct drm_gem_object *obj;
+	int ret;
+
+	/*
+	 * Dumb creation is a two stage process: first we create a fully
+	 * constructed GEM object which is communicated to the backend, and
+	 * only after that we can create GEM's handle. This is done so,
+	 * because of the possible races: once you create a handle it becomes
+	 * immediately visible to user-space, so the latter can try accessing
+	 * object without pages etc.
+	 * For details also see drm_gem_handle_create
+	 */
+	args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
+	args->size = args->pitch * args->height;
+
+	obj = xen_drm_front_gem_create(dev, args->size);
+	if (IS_ERR(obj)) {
+		ret = PTR_ERR(obj);
+		goto fail;
+	}
+
+	ret = xen_drm_front_dbuf_create(drm_info->front_info,
+					xen_drm_front_dbuf_to_cookie(obj),
+					args->width, args->height, args->bpp,
+					args->size, 0,
+					xen_drm_front_gem_get_pages(obj));
+	if (ret)
+		goto fail_backend;
+
+	/* This is the tail of GEM object creation */
+	ret = drm_gem_handle_create(filp, obj, &args->handle);
+	if (ret)
+		goto fail_handle;
+
+	/* Drop reference from allocate - handle holds it now */
+	drm_gem_object_put(obj);
+	return 0;
+
+fail_handle:
+	xen_drm_front_dbuf_destroy(drm_info->front_info,
+				   xen_drm_front_dbuf_to_cookie(obj));
+fail_backend:
+	/* drop reference from allocate */
+	drm_gem_object_put(obj);
+fail:
+	DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
+	return ret;
+}
+
+static void xen_drm_drv_release(struct drm_device *dev)
+{
+	struct xen_drm_front_drm_info *drm_info = dev->dev_private;
+	struct xen_drm_front_info *front_info = drm_info->front_info;
+
+	xen_drm_front_kms_fini(drm_info);
+
+	drm_atomic_helper_shutdown(dev);
+	drm_mode_config_cleanup(dev);
+
+	if (front_info->cfg.be_alloc)
+		xenbus_switch_state(front_info->xb_dev,
+				    XenbusStateInitialising);
+
+	kfree(drm_info);
+}
+
+DEFINE_DRM_GEM_FOPS(xen_drm_dev_fops);
+
+static const struct drm_driver xen_drm_driver = {
+	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
+	.release = xen_drm_drv_release,
+	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+	.gem_prime_import_sg_table = xen_drm_front_gem_import_sg_table,
+	.gem_prime_mmap = drm_gem_prime_mmap,
+	.dumb_create = xen_drm_drv_dumb_create,
+	.fops = &xen_drm_dev_fops,
+	.name = "xendrm-du",
+	.desc = "Xen PV DRM Display Unit",
+	.date = "20180221",
+	.major = 1,
+	.minor = 0,
+
+};
+
+static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
+{
+	struct device *dev = &front_info->xb_dev->dev;
+	struct xen_drm_front_drm_info *drm_info;
+	struct drm_device *drm_dev;
+	int ret;
+
+	if (drm_firmware_drivers_only())
+		return -ENODEV;
+
+	DRM_INFO("Creating %s\n", xen_drm_driver.desc);
+
+	drm_info = kzalloc(sizeof(*drm_info), GFP_KERNEL);
+	if (!drm_info) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	drm_info->front_info = front_info;
+	front_info->drm_info = drm_info;
+
+	drm_dev = drm_dev_alloc(&xen_drm_driver, dev);
+	if (IS_ERR(drm_dev)) {
+		ret = PTR_ERR(drm_dev);
+		goto fail_dev;
+	}
+
+	drm_info->drm_dev = drm_dev;
+
+	drm_dev->dev_private = drm_info;
+
+	ret = xen_drm_front_kms_init(drm_info);
+	if (ret) {
+		DRM_ERROR("Failed to initialize DRM/KMS, ret %d\n", ret);
+		goto fail_modeset;
+	}
+
+	ret = drm_dev_register(drm_dev, 0);
+	if (ret)
+		goto fail_register;
+
+	DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
+		 xen_drm_driver.name, xen_drm_driver.major,
+		 xen_drm_driver.minor, xen_drm_driver.patchlevel,
+		 xen_drm_driver.date, drm_dev->primary->index);
+
+	return 0;
+
+fail_register:
+	drm_dev_unregister(drm_dev);
+fail_modeset:
+	drm_kms_helper_poll_fini(drm_dev);
+	drm_mode_config_cleanup(drm_dev);
+	drm_dev_put(drm_dev);
+fail_dev:
+	kfree(drm_info);
+	front_info->drm_info = NULL;
+fail:
+	return ret;
+}
+
+static void xen_drm_drv_fini(struct xen_drm_front_info *front_info)
+{
+	struct xen_drm_front_drm_info *drm_info = front_info->drm_info;
+	struct drm_device *dev;
+
+	if (!drm_info)
+		return;
+
+	dev = drm_info->drm_dev;
+	if (!dev)
+		return;
+
+	/* Nothing to do if device is already unplugged */
+	if (drm_dev_is_unplugged(dev))
+		return;
+
+	drm_kms_helper_poll_fini(dev);
+	drm_dev_unplug(dev);
+	drm_dev_put(dev);
+
+	front_info->drm_info = NULL;
+
+	xen_drm_front_evtchnl_free_all(front_info);
+	dbuf_free_all(&front_info->dbuf_list);
+
+	/*
+	 * If we are not using backend allocated buffers, then tell the
+	 * backend we are ready to (re)initialize. Otherwise, wait for
+	 * drm_driver.release.
+	 */
+	if (!front_info->cfg.be_alloc)
+		xenbus_switch_state(front_info->xb_dev,
+				    XenbusStateInitialising);
+}
+
+static int displback_initwait(struct xen_drm_front_info *front_info)
+{
+	struct xen_drm_front_cfg *cfg = &front_info->cfg;
+	int ret;
+
+	cfg->front_info = front_info;
+	ret = xen_drm_front_cfg_card(front_info, cfg);
+	if (ret < 0)
+		return ret;
+
+	DRM_INFO("Have %d connector(s)\n", cfg->num_connectors);
+	/* Create event channels for all connectors and publish */
+	ret = xen_drm_front_evtchnl_create_all(front_info);
+	if (ret < 0)
+		return ret;
+
+	return xen_drm_front_evtchnl_publish_all(front_info);
+}
+
+static int displback_connect(struct xen_drm_front_info *front_info)
+{
+	xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_CONNECTED);
+	return xen_drm_drv_init(front_info);
+}
+
+static void displback_disconnect(struct xen_drm_front_info *front_info)
+{
+	if (!front_info->drm_info)
+		return;
+
+	/* Tell the backend to wait until we release the DRM driver. */
+	xenbus_switch_state(front_info->xb_dev, XenbusStateReconfiguring);
+
+	xen_drm_drv_fini(front_info);
+}
+
+static void displback_changed(struct xenbus_device *xb_dev,
+			      enum xenbus_state backend_state)
+{
+	struct xen_drm_front_info *front_info = dev_get_drvdata(&xb_dev->dev);
+	int ret;
+
+	DRM_DEBUG("Backend state is %s, front is %s\n",
+		  xenbus_strstate(backend_state),
+		  xenbus_strstate(xb_dev->state));
+
+	switch (backend_state) {
+	case XenbusStateReconfiguring:
+	case XenbusStateReconfigured:
+	case XenbusStateInitialised:
+		break;
+
+	case XenbusStateInitialising:
+		if (xb_dev->state == XenbusStateReconfiguring)
+			break;
+
+		/* recovering after backend unexpected closure */
+		displback_disconnect(front_info);
+		break;
+
+	case XenbusStateInitWait:
+		if (xb_dev->state == XenbusStateReconfiguring)
+			break;
+
+		/* recovering after backend unexpected closure */
+		displback_disconnect(front_info);
+		if (xb_dev->state != XenbusStateInitialising)
+			break;
+
+		ret = displback_initwait(front_info);
+		if (ret < 0)
+			xenbus_dev_fatal(xb_dev, ret, "initializing frontend");
+		else
+			xenbus_switch_state(xb_dev, XenbusStateInitialised);
+		break;
+
+	case XenbusStateConnected:
+		if (xb_dev->state != XenbusStateInitialised)
+			break;
+
+		ret = displback_connect(front_info);
+		if (ret < 0) {
+			displback_disconnect(front_info);
+			xenbus_dev_fatal(xb_dev, ret, "connecting backend");
+		} else {
+			xenbus_switch_state(xb_dev, XenbusStateConnected);
+		}
+		break;
+
+	case XenbusStateClosing:
+		/*
+		 * in this state backend starts freeing resources,
+		 * so let it go into closed state, so we can also
+		 * remove ours
+		 */
+		break;
+
+	case XenbusStateUnknown:
+	case XenbusStateClosed:
+		if (xb_dev->state == XenbusStateClosed)
+			break;
+
+		displback_disconnect(front_info);
+		break;
+	}
+}
+
+static int xen_drv_probe(struct xenbus_device *xb_dev,
+			 const struct xenbus_device_id *id)
+{
+	struct xen_drm_front_info *front_info;
+	struct device *dev = &xb_dev->dev;
+	int ret;
+
+	ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
+	if (ret < 0) {
+		DRM_ERROR("Cannot setup DMA mask, ret %d", ret);
+		return ret;
+	}
+
+	front_info = devm_kzalloc(&xb_dev->dev,
+				  sizeof(*front_info), GFP_KERNEL);
+	if (!front_info)
+		return -ENOMEM;
+
+	front_info->xb_dev = xb_dev;
+	spin_lock_init(&front_info->io_lock);
+	INIT_LIST_HEAD(&front_info->dbuf_list);
+	dev_set_drvdata(&xb_dev->dev, front_info);
+
+	return xenbus_switch_state(xb_dev, XenbusStateInitialising);
+}
+
+static void xen_drv_remove(struct xenbus_device *dev)
+{
+	struct xen_drm_front_info *front_info = dev_get_drvdata(&dev->dev);
+	int to = 100;
+
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/*
+	 * On driver removal it is disconnected from XenBus,
+	 * so no backend state change events come via .otherend_changed
+	 * callback. This prevents us from exiting gracefully, e.g.
+	 * signaling the backend to free event channels, waiting for its
+	 * state to change to XenbusStateClosed and cleaning at our end.
+	 * Normally when front driver removed backend will finally go into
+	 * XenbusStateInitWait state.
+	 *
+	 * Workaround: read backend's state manually and wait with time-out.
+	 */
+	while ((xenbus_read_unsigned(front_info->xb_dev->otherend, "state",
+				     XenbusStateUnknown) != XenbusStateInitWait) &&
+	       --to)
+		msleep(10);
+
+	if (!to) {
+		unsigned int state;
+
+		state = xenbus_read_unsigned(front_info->xb_dev->otherend,
+					     "state", XenbusStateUnknown);
+		DRM_ERROR("Backend state is %s while removing driver\n",
+			  xenbus_strstate(state));
+	}
+
+	xen_drm_drv_fini(front_info);
+	xenbus_frontend_closed(dev);
+}
+
+static const struct xenbus_device_id xen_driver_ids[] = {
+	{ XENDISPL_DRIVER_NAME },
+	{ "" }
+};
+
+static struct xenbus_driver xen_driver = {
+	.ids = xen_driver_ids,
+	.probe = xen_drv_probe,
+	.remove = xen_drv_remove,
+	.otherend_changed = displback_changed,
+	.not_essential = true,
+};
+
+static int __init xen_drv_init(void)
+{
+	/* At the moment we only support case with XEN_PAGE_SIZE == PAGE_SIZE */
+	if (XEN_PAGE_SIZE != PAGE_SIZE) {
+		DRM_ERROR(XENDISPL_DRIVER_NAME ": different kernel and Xen page sizes are not supported: XEN_PAGE_SIZE (%lu) != PAGE_SIZE (%lu)\n",
+			  XEN_PAGE_SIZE, PAGE_SIZE);
+		return -ENODEV;
+	}
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	if (!xen_has_pv_devices())
+		return -ENODEV;
+
+	DRM_INFO("Registering XEN PV " XENDISPL_DRIVER_NAME "\n");
+	return xenbus_register_frontend(&xen_driver);
+}
+
+static void __exit xen_drv_fini(void)
+{
+	DRM_INFO("Unregistering XEN PV " XENDISPL_DRIVER_NAME "\n");
+	xenbus_unregister_driver(&xen_driver);
+}
+
+module_init(xen_drv_init);
+module_exit(xen_drv_fini);
+
+MODULE_DESCRIPTION("Xen para-virtualized display device frontend");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS("xen:" XENDISPL_DRIVER_NAME);