author     2023-02-21 18:24:12 -0800
committer  2023-02-21 18:24:12 -0800
commit     5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree       cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /drivers/gpu/drm/nouveau/nvkm/core
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add a dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs.
- Define a Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port
range on a per-socket basis (a userspace sketch follows the quoted
message).
- Track and report in procfs the number of MPTCP sockets in use.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support the Penultimate Segment Pop (PSP) flavor in SRv6 (RFC 8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- Wi-Fi 7 EHT channel puncturing support (client & AP)
BPF:
- Add an rbtree data structure, following the "next-gen data
structure" precedent set by the recently added linked list, that is,
by using kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs, with initial support for RX hash and
timestamp metadata (a BPF-side sketch follows the quoted message).
- Add the BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in
collect-metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscalls as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add the __bpf_kfunc tag for marking kernel functions as kfuncs (a
registration sketch follows the quoted message).
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still see WARN splats from races in the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of the Physical Layer Collision Avoidance
(PLCA) Reconciliation Sublayer (RS) (802.3cg-2019), a modern version
of shared-medium Ethernet.
- Support for the MAC Merge layer (IEEE 802.3-2018 clause 99),
allowing preemption of low-priority frames by high-priority frames.
- Add support for controlling MACsec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if the old Wireless Extensions user space interface is used
with modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all; everyone should switch to
the nl80211 interface instead.
- Improve the CAN bit timing configuration: use extack to return
error messages directly to user space, and update the SJW handling,
including a new default value that benefits CAN-FD controllers by
increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper PHYs
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP-related parameters
(STEP is the connection to the radio on platforms with integrated
Wi-Fi) from the BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
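
A minimal userspace sketch of the IP_LOCAL_PORT_RANGE socket option called
out above. It assumes the v6.3 uAPI encoding (a u32 with the low port in
bits 0-15 and the high port in bits 16-31, option number 51 from
include/uapi/linux/in.h); treat the fallback define as illustrative and
check your system headers:

	#include <stdint.h>
	#include <stdio.h>
	#include <netinet/in.h>
	#include <sys/socket.h>

	#ifndef IP_LOCAL_PORT_RANGE
	#define IP_LOCAL_PORT_RANGE 51	/* include/uapi/linux/in.h, v6.3 */
	#endif

	int main(void)
	{
		int fd = socket(AF_INET, SOCK_STREAM, 0);
		if (fd < 0) {
			perror("socket");
			return 1;
		}

		/* Restrict this socket's local ports to 40000..40100:
		 * low bound in bits 0-15, high bound in bits 16-31. */
		uint32_t range = 40000 | (40100u << 16);
		if (setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
			       &range, sizeof(range)) < 0)
			perror("setsockopt(IP_LOCAL_PORT_RANGE)");

		return 0;
	}

Unlike the net.ipv4.ip_local_port_range sysctl, this narrows the ephemeral
port range for one socket only, which is useful behind SNAT setups that
map ranges of ports to hosts.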
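
A BPF-side sketch of consuming the new XDP RX hints kfuncs. The kfunc
names follow the v6.3 xdp_metadata implementation, but the extern
declarations here are assumptions; a driver must implement the
corresponding xdp_metadata_ops for the calls to succeed:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	/* XDP hints kfuncs exported by drivers implementing xdp_metadata_ops. */
	extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
						 __u64 *timestamp) __ksym;
	extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
					    __u32 *hash) __ksym;

	SEC("xdp")
	int rx_hints(struct xdp_md *ctx)
	{
		__u64 ts = 0;
		__u32 hash = 0;

		/* Each call fails cleanly if the driver offers no such hint. */
		if (!bpf_xdp_metadata_rx_timestamp(ctx, &ts))
			bpf_printk("rx timestamp: %llu", ts);
		if (!bpf_xdp_metadata_rx_hash(ctx, &hash))
			bpf_printk("rx hash: %x", hash);

		return XDP_PASS;
	}

	char LICENSE[] SEC("license") = "GPL";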
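
And a kernel-module sketch of the new __bpf_kfunc tag together with the
registration pattern used in-tree around v6.3 (BTF_SET8_START/BTF_ID_FLAGS
plus register_btf_kfunc_id_set); bpf_example_add() is a hypothetical kfunc,
not something added by this pull:

	#include <linux/bpf.h>
	#include <linux/btf.h>
	#include <linux/btf_ids.h>
	#include <linux/module.h>

	/* The __bpf_kfunc tag keeps the symbol visible to BTF and stops the
	 * compiler from removing, inlining, or renaming it. */
	__bpf_kfunc u64 bpf_example_add(u64 a, u64 b)
	{
		return a + b;
	}

	BTF_SET8_START(example_kfunc_ids)
	BTF_ID_FLAGS(func, bpf_example_add)
	BTF_SET8_END(example_kfunc_ids)

	static const struct btf_kfunc_id_set example_kfunc_set = {
		.owner = THIS_MODULE,
		.set   = &example_kfunc_ids,
	};

	static int __init example_init(void)
	{
		/* Expose the kfunc to tracing programs. */
		return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
						 &example_kfunc_set);
	}
	module_init(example_init);
	MODULE_LICENSE("GPL");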
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'drivers/gpu/drm/nouveau/nvkm/core')
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/Kbuild     |  17
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/client.c   | 187
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/engine.c   | 189
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/enum.c     |  56
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/event.c    | 214
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/firmware.c | 239
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/gpuobj.c   | 278
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/intr.c     | 442
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/ioctl.c    | 390
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/memory.c   | 154
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/mm.c       | 307
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/object.c   | 336
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/oproxy.c   | 227
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/option.c   | 139
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/ramht.c    | 162
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/subdev.c   | 275
-rw-r--r--  drivers/gpu/drm/nouveau/nvkm/core/uevent.c   | 157
17 files changed, 3769 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nvkm/core/Kbuild b/drivers/gpu/drm/nouveau/nvkm/core/Kbuild new file mode 100644 index 000000000..e40712023 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/Kbuild @@ -0,0 +1,17 @@ +# SPDX-License-Identifier: MIT +nvkm-y := nvkm/core/client.o +nvkm-y += nvkm/core/engine.o +nvkm-y += nvkm/core/enum.o +nvkm-y += nvkm/core/event.o +nvkm-y += nvkm/core/firmware.o +nvkm-y += nvkm/core/gpuobj.o +nvkm-y += nvkm/core/intr.o +nvkm-y += nvkm/core/ioctl.o +nvkm-y += nvkm/core/memory.o +nvkm-y += nvkm/core/mm.o +nvkm-y += nvkm/core/object.o +nvkm-y += nvkm/core/oproxy.o +nvkm-y += nvkm/core/option.o +nvkm-y += nvkm/core/ramht.o +nvkm-y += nvkm/core/subdev.o +nvkm-y += nvkm/core/uevent.o diff --git a/drivers/gpu/drm/nouveau/nvkm/core/client.c b/drivers/gpu/drm/nouveau/nvkm/core/client.c new file mode 100644 index 000000000..ebdeb8eb9 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/client.c @@ -0,0 +1,187 @@ +/* + * Copyright 2012 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + * Authors: Ben Skeggs + */ +#include <core/client.h> +#include <core/device.h> +#include <core/option.h> + +#include <nvif/class.h> +#include <nvif/event.h> +#include <nvif/if0000.h> +#include <nvif/unpack.h> + +static int +nvkm_uclient_new(const struct nvkm_oclass *oclass, void *argv, u32 argc, + struct nvkm_object **pobject) +{ + union { + struct nvif_client_v0 v0; + } *args = argv; + struct nvkm_client *client; + int ret = -ENOSYS; + + if (!(ret = nvif_unpack(ret, &argv, &argc, args->v0, 0, 0, false))){ + args->v0.name[sizeof(args->v0.name) - 1] = 0; + ret = nvkm_client_new(args->v0.name, args->v0.device, NULL, + NULL, oclass->client->event, &client); + if (ret) + return ret; + } else + return ret; + + client->object.client = oclass->client; + client->object.handle = oclass->handle; + client->object.route = oclass->route; + client->object.token = oclass->token; + client->object.object = oclass->object; + client->debug = oclass->client->debug; + *pobject = &client->object; + return 0; +} + +static const struct nvkm_sclass +nvkm_uclient_sclass = { + .oclass = NVIF_CLASS_CLIENT, + .minver = 0, + .maxver = 0, + .ctor = nvkm_uclient_new, +}; + +static const struct nvkm_object_func nvkm_client; +struct nvkm_client * +nvkm_client_search(struct nvkm_client *client, u64 handle) +{ + struct nvkm_object *object; + + object = nvkm_object_search(client, handle, &nvkm_client); + if (IS_ERR(object)) + return (void *)object; + + return nvkm_client(object); +} + +static int +nvkm_client_mthd_devlist(struct nvkm_client *client, void *data, u32 size) +{ + union { + struct nvif_client_devlist_v0 v0; + } *args = data; + int ret = -ENOSYS; + + nvif_ioctl(&client->object, "client devlist size %d\n", size); + if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, true))) { + nvif_ioctl(&client->object, "client devlist vers %d count %d\n", + args->v0.version, args->v0.count); + if (size == sizeof(args->v0.device[0]) * args->v0.count) { + ret = nvkm_device_list(args->v0.device, args->v0.count); + if (ret >= 0) { + args->v0.count = ret; + ret = 0; + } + } else { + ret = -EINVAL; + } + } + + return ret; +} + +static int +nvkm_client_mthd(struct nvkm_object *object, u32 mthd, void *data, u32 size) +{ + struct nvkm_client *client = nvkm_client(object); + switch (mthd) { + case NVIF_CLIENT_V0_DEVLIST: + return nvkm_client_mthd_devlist(client, data, size); + default: + break; + } + return -EINVAL; +} + +static int +nvkm_client_child_new(const struct nvkm_oclass *oclass, + void *data, u32 size, struct nvkm_object **pobject) +{ + return oclass->base.ctor(oclass, data, size, pobject); +} + +static int +nvkm_client_child_get(struct nvkm_object *object, int index, + struct nvkm_oclass *oclass) +{ + const struct nvkm_sclass *sclass; + + switch (index) { + case 0: sclass = &nvkm_uclient_sclass; break; + case 1: sclass = &nvkm_udevice_sclass; break; + default: + return -EINVAL; + } + + oclass->ctor = nvkm_client_child_new; + oclass->base = *sclass; + return 0; +} + +static int +nvkm_client_fini(struct nvkm_object *object, bool suspend) +{ + return 0; +} + +static void * +nvkm_client_dtor(struct nvkm_object *object) +{ + return nvkm_client(object); +} + +static const struct nvkm_object_func +nvkm_client = { + .dtor = nvkm_client_dtor, + .fini = nvkm_client_fini, + .mthd = nvkm_client_mthd, + .sclass = nvkm_client_child_get, +}; + +int +nvkm_client_new(const char *name, u64 device, const char *cfg, const char *dbg, + int (*event)(u64, void *, u32), struct nvkm_client **pclient) +{ + struct nvkm_oclass oclass = { .base 
= nvkm_uclient_sclass }; + struct nvkm_client *client; + + if (!(client = *pclient = kzalloc(sizeof(*client), GFP_KERNEL))) + return -ENOMEM; + oclass.client = client; + + nvkm_object_ctor(&nvkm_client, &oclass, &client->object); + snprintf(client->name, sizeof(client->name), "%s", name); + client->device = device; + client->debug = nvkm_dbgopt(dbg, "CLIENT"); + client->objroot = RB_ROOT; + client->event = event; + INIT_LIST_HEAD(&client->umem); + spin_lock_init(&client->lock); + return 0; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/engine.c b/drivers/gpu/drm/nouveau/nvkm/core/engine.c new file mode 100644 index 000000000..36a31e9ee --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/engine.c @@ -0,0 +1,189 @@ +/* + * Copyright 2012 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + * Authors: Ben Skeggs + */ +#include <core/engine.h> +#include <core/device.h> +#include <core/option.h> + +#include <subdev/fb.h> + +bool +nvkm_engine_chsw_load(struct nvkm_engine *engine) +{ + if (engine->func->chsw_load) + return engine->func->chsw_load(engine); + return false; +} + +int +nvkm_engine_reset(struct nvkm_engine *engine) +{ + if (engine->func->reset) + return engine->func->reset(engine); + + nvkm_subdev_fini(&engine->subdev, false); + return nvkm_subdev_init(&engine->subdev); +} + +void +nvkm_engine_unref(struct nvkm_engine **pengine) +{ + struct nvkm_engine *engine = *pengine; + + if (engine) { + nvkm_subdev_unref(&engine->subdev); + *pengine = NULL; + } +} + +struct nvkm_engine * +nvkm_engine_ref(struct nvkm_engine *engine) +{ + int ret; + + if (engine) { + ret = nvkm_subdev_ref(&engine->subdev); + if (ret) + return ERR_PTR(ret); + } + + return engine; +} + +void +nvkm_engine_tile(struct nvkm_engine *engine, int region) +{ + struct nvkm_fb *fb = engine->subdev.device->fb; + if (engine->func->tile) + engine->func->tile(engine, region, &fb->tile.region[region]); +} + +static void +nvkm_engine_intr(struct nvkm_subdev *subdev) +{ + struct nvkm_engine *engine = nvkm_engine(subdev); + if (engine->func->intr) + engine->func->intr(engine); +} + +static int +nvkm_engine_info(struct nvkm_subdev *subdev, u64 mthd, u64 *data) +{ + struct nvkm_engine *engine = nvkm_engine(subdev); + + if (engine->func->info) + return engine->func->info(engine, mthd, data); + + return -ENOSYS; +} + +static int +nvkm_engine_fini(struct nvkm_subdev *subdev, bool suspend) +{ + struct nvkm_engine *engine = nvkm_engine(subdev); + if (engine->func->fini) + return engine->func->fini(engine, suspend); + return 0; +} + +static int +nvkm_engine_init(struct nvkm_subdev *subdev) +{ + struct nvkm_engine *engine = nvkm_engine(subdev); + struct nvkm_fb *fb = subdev->device->fb; + int ret = 0, i; + + if (engine->func->init) + ret = engine->func->init(engine); + + for (i = 0; fb && i < fb->tile.regions; i++) + nvkm_engine_tile(engine, i); + return ret; +} + +static int +nvkm_engine_oneinit(struct nvkm_subdev *subdev) +{ + struct nvkm_engine *engine = nvkm_engine(subdev); + + if (engine->func->oneinit) + return engine->func->oneinit(engine); + + return 0; +} + +static int +nvkm_engine_preinit(struct nvkm_subdev *subdev) +{ + struct nvkm_engine *engine = nvkm_engine(subdev); + if (engine->func->preinit) + engine->func->preinit(engine); + return 0; +} + +static void * +nvkm_engine_dtor(struct nvkm_subdev *subdev) +{ + struct nvkm_engine *engine = nvkm_engine(subdev); + if (engine->func->dtor) + return engine->func->dtor(engine); + return engine; +} + +const struct nvkm_subdev_func +nvkm_engine = { + .dtor = nvkm_engine_dtor, + .preinit = nvkm_engine_preinit, + .oneinit = nvkm_engine_oneinit, + .init = nvkm_engine_init, + .fini = nvkm_engine_fini, + .info = nvkm_engine_info, + .intr = nvkm_engine_intr, +}; + +int +nvkm_engine_ctor(const struct nvkm_engine_func *func, struct nvkm_device *device, + enum nvkm_subdev_type type, int inst, bool enable, struct nvkm_engine *engine) +{ + engine->func = func; + nvkm_subdev_ctor(&nvkm_engine, device, type, inst, &engine->subdev); + refcount_set(&engine->subdev.use.refcount, 0); + + if (!nvkm_boolopt(device->cfgopt, engine->subdev.name, enable)) { + nvkm_debug(&engine->subdev, "disabled\n"); + return -ENODEV; + } + + spin_lock_init(&engine->lock); + return 0; +} + +int +nvkm_engine_new_(const struct nvkm_engine_func *func, struct nvkm_device *device, + enum nvkm_subdev_type 
type, int inst, bool enable, + struct nvkm_engine **pengine) +{ + if (!(*pengine = kzalloc(sizeof(**pengine), GFP_KERNEL))) + return -ENOMEM; + return nvkm_engine_ctor(func, device, type, inst, enable, *pengine); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/enum.c b/drivers/gpu/drm/nouveau/nvkm/core/enum.c new file mode 100644 index 000000000..b9581feb2 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/enum.c @@ -0,0 +1,56 @@ +/* + * Copyright (C) 2010 Nouveau Project + * + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining + * a copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sublicense, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial + * portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + * IN NO EVENT SHALL THE COPYRIGHT OWNER(S) AND/OR ITS SUPPLIERS BE + * LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION + * OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION + * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + * + */ +#include <core/enum.h> + +const struct nvkm_enum * +nvkm_enum_find(const struct nvkm_enum *en, u32 value) +{ + while (en->name) { + if (en->value == value) + return en; + en++; + } + + return NULL; +} + +void +nvkm_snprintbf(char *data, int size, const struct nvkm_bitfield *bf, u32 value) +{ + bool space = false; + while (size >= 1 && bf->name) { + if (value & bf->mask) { + int this = snprintf(data, size, "%s%s", + space ? " " : "", bf->name); + size -= this; + data += this; + space = true; + } + bf++; + } + data[0] = '\0'; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/event.c b/drivers/gpu/drm/nouveau/nvkm/core/event.c new file mode 100644 index 000000000..a6c877135 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/event.c @@ -0,0 +1,214 @@ +/* + * Copyright 2013-2014 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#include <core/event.h> +#include <core/subdev.h> + +static void +nvkm_event_put(struct nvkm_event *event, u32 types, int index) +{ + assert_spin_locked(&event->refs_lock); + + nvkm_trace(event->subdev, "event: decr %08x on %d\n", types, index); + + while (types) { + int type = __ffs(types); types &= ~(1 << type); + if (--event->refs[index * event->types_nr + type] == 0) { + nvkm_trace(event->subdev, "event: blocking %d on %d\n", type, index); + if (event->func->fini) + event->func->fini(event, 1 << type, index); + } + } +} + +static void +nvkm_event_get(struct nvkm_event *event, u32 types, int index) +{ + assert_spin_locked(&event->refs_lock); + + nvkm_trace(event->subdev, "event: incr %08x on %d\n", types, index); + + while (types) { + int type = __ffs(types); types &= ~(1 << type); + if (++event->refs[index * event->types_nr + type] == 1) { + nvkm_trace(event->subdev, "event: allowing %d on %d\n", type, index); + if (event->func->init) + event->func->init(event, 1 << type, index); + } + } +} + +static void +nvkm_event_ntfy_state(struct nvkm_event_ntfy *ntfy) +{ + struct nvkm_event *event = ntfy->event; + unsigned long flags; + + nvkm_trace(event->subdev, "event: ntfy state changed\n"); + spin_lock_irqsave(&event->refs_lock, flags); + + if (atomic_read(&ntfy->allowed) != ntfy->running) { + if (ntfy->running) { + nvkm_event_put(ntfy->event, ntfy->bits, ntfy->id); + ntfy->running = false; + } else { + nvkm_event_get(ntfy->event, ntfy->bits, ntfy->id); + ntfy->running = true; + } + } + + spin_unlock_irqrestore(&event->refs_lock, flags); +} + +static void +nvkm_event_ntfy_remove(struct nvkm_event_ntfy *ntfy) +{ + spin_lock_irq(&ntfy->event->list_lock); + list_del_init(&ntfy->head); + spin_unlock_irq(&ntfy->event->list_lock); +} + +static void +nvkm_event_ntfy_insert(struct nvkm_event_ntfy *ntfy) +{ + spin_lock_irq(&ntfy->event->list_lock); + list_add_tail(&ntfy->head, &ntfy->event->ntfy); + spin_unlock_irq(&ntfy->event->list_lock); +} + +static void +nvkm_event_ntfy_block_(struct nvkm_event_ntfy *ntfy, bool wait) +{ + struct nvkm_subdev *subdev = ntfy->event->subdev; + + nvkm_trace(subdev, "event: ntfy block %08x on %d wait:%d\n", ntfy->bits, ntfy->id, wait); + + if (atomic_xchg(&ntfy->allowed, 0) == 1) { + nvkm_event_ntfy_state(ntfy); + if (wait) + nvkm_event_ntfy_remove(ntfy); + } +} + +void +nvkm_event_ntfy_block(struct nvkm_event_ntfy *ntfy) +{ + if (ntfy->event) + nvkm_event_ntfy_block_(ntfy, ntfy->wait); +} + +void +nvkm_event_ntfy_allow(struct nvkm_event_ntfy *ntfy) +{ + nvkm_trace(ntfy->event->subdev, "event: ntfy allow %08x on %d\n", ntfy->bits, ntfy->id); + + if (atomic_xchg(&ntfy->allowed, 1) == 0) { + nvkm_event_ntfy_state(ntfy); + if (ntfy->wait) + nvkm_event_ntfy_insert(ntfy); + } +} + +void +nvkm_event_ntfy_del(struct nvkm_event_ntfy *ntfy) +{ + struct nvkm_event *event = ntfy->event; + + if (!event) + return; + + nvkm_trace(event->subdev, "event: ntfy del %08x on %d\n", ntfy->bits, ntfy->id); + + nvkm_event_ntfy_block_(ntfy, false); + nvkm_event_ntfy_remove(ntfy); + ntfy->event = NULL; +} + +void +nvkm_event_ntfy_add(struct nvkm_event *event, int id, u32 bits, bool wait, nvkm_event_func func, + struct nvkm_event_ntfy *ntfy) +{ + nvkm_trace(event->subdev, "event: ntfy add %08x on %d wait:%d\n", 
id, bits, wait); + + ntfy->event = event; + ntfy->id = id; + ntfy->bits = bits; + ntfy->wait = wait; + ntfy->func = func; + atomic_set(&ntfy->allowed, 0); + ntfy->running = false; + INIT_LIST_HEAD(&ntfy->head); + if (!ntfy->wait) + nvkm_event_ntfy_insert(ntfy); +} + +bool +nvkm_event_ntfy_valid(struct nvkm_event *event, int id, u32 bits) +{ + return true; +} + +void +nvkm_event_ntfy(struct nvkm_event *event, int id, u32 bits) +{ + struct nvkm_event_ntfy *ntfy, *ntmp; + unsigned long flags; + + if (!event->refs || WARN_ON(id >= event->index_nr)) + return; + + nvkm_trace(event->subdev, "event: ntfy %08x on %d\n", bits, id); + spin_lock_irqsave(&event->list_lock, flags); + + list_for_each_entry_safe(ntfy, ntmp, &event->ntfy, head) { + if (ntfy->id == id && ntfy->bits & bits) { + if (atomic_read(&ntfy->allowed)) + ntfy->func(ntfy, ntfy->bits & bits); + } + } + + spin_unlock_irqrestore(&event->list_lock, flags); +} + +void +nvkm_event_fini(struct nvkm_event *event) +{ + if (event->refs) { + kfree(event->refs); + event->refs = NULL; + } +} + +int +__nvkm_event_init(const struct nvkm_event_func *func, struct nvkm_subdev *subdev, + int types_nr, int index_nr, struct nvkm_event *event) +{ + event->refs = kzalloc(array3_size(index_nr, types_nr, sizeof(*event->refs)), GFP_KERNEL); + if (!event->refs) + return -ENOMEM; + + event->func = func; + event->subdev = subdev; + event->types_nr = types_nr; + event->index_nr = index_nr; + INIT_LIST_HEAD(&event->ntfy); + return 0; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/firmware.c b/drivers/gpu/drm/nouveau/nvkm/core/firmware.c new file mode 100644 index 000000000..91fb494d4 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/firmware.c @@ -0,0 +1,239 @@ +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. 
+ */ +#include <core/device.h> +#include <core/firmware.h> + +#include <subdev/fb.h> +#include <subdev/mmu.h> + +int +nvkm_firmware_load_name(const struct nvkm_subdev *subdev, const char *base, + const char *name, int ver, const struct firmware **pfw) +{ + char path[64]; + int ret; + + snprintf(path, sizeof(path), "%s%s", base, name); + ret = nvkm_firmware_get(subdev, path, ver, pfw); + if (ret < 0) + return ret; + + return 0; +} + +int +nvkm_firmware_load_blob(const struct nvkm_subdev *subdev, const char *base, + const char *name, int ver, struct nvkm_blob *blob) +{ + const struct firmware *fw; + int ret; + + ret = nvkm_firmware_load_name(subdev, base, name, ver, &fw); + if (ret == 0) { + blob->data = kmemdup(fw->data, fw->size, GFP_KERNEL); + blob->size = fw->size; + nvkm_firmware_put(fw); + if (!blob->data) + return -ENOMEM; + } + + return ret; +} + +/** + * nvkm_firmware_get - load firmware from the official nvidia/chip/ directory + * @subdev: subdevice that will use that firmware + * @fwname: name of firmware file to load + * @ver: firmware version to load + * @fw: firmware structure to load to + * + * Use this function to load firmware files in the form nvidia/chip/fwname.bin. + * Firmware files released by NVIDIA will always follow this format. + */ +int +nvkm_firmware_get(const struct nvkm_subdev *subdev, const char *fwname, int ver, + const struct firmware **fw) +{ + struct nvkm_device *device = subdev->device; + char f[64]; + char cname[16]; + int i; + + /* Convert device name to lowercase */ + strncpy(cname, device->chip->name, sizeof(cname)); + cname[sizeof(cname) - 1] = '\0'; + i = strlen(cname); + while (i) { + --i; + cname[i] = tolower(cname[i]); + } + + if (ver != 0) + snprintf(f, sizeof(f), "nvidia/%s/%s-%d.bin", cname, fwname, ver); + else + snprintf(f, sizeof(f), "nvidia/%s/%s.bin", cname, fwname); + + if (!firmware_request_nowarn(fw, f, device->dev)) { + nvkm_debug(subdev, "firmware \"%s\" loaded - %zu byte(s)\n", + f, (*fw)->size); + return 0; + } + + nvkm_debug(subdev, "firmware \"%s\" unavailable\n", f); + return -ENOENT; +} + +/* + * nvkm_firmware_put - release firmware loaded with nvkm_firmware_get + */ +void +nvkm_firmware_put(const struct firmware *fw) +{ + release_firmware(fw); +} + +#define nvkm_firmware_mem(p) container_of((p), struct nvkm_firmware, mem.memory) + +static int +nvkm_firmware_mem_map(struct nvkm_memory *memory, u64 offset, struct nvkm_vmm *vmm, + struct nvkm_vma *vma, void *argv, u32 argc) +{ + struct nvkm_firmware *fw = nvkm_firmware_mem(memory); + struct nvkm_vmm_map map = { + .memory = &fw->mem.memory, + .offset = offset, + .sgl = &fw->mem.sgl, + }; + + if (WARN_ON(fw->func->type != NVKM_FIRMWARE_IMG_DMA)) + return -ENOSYS; + + return nvkm_vmm_map(vmm, vma, argv, argc, &map); +} + +static u64 +nvkm_firmware_mem_size(struct nvkm_memory *memory) +{ + return sg_dma_len(&nvkm_firmware_mem(memory)->mem.sgl); +} + +static u64 +nvkm_firmware_mem_addr(struct nvkm_memory *memory) +{ + return nvkm_firmware_mem(memory)->phys; +} + +static u8 +nvkm_firmware_mem_page(struct nvkm_memory *memory) +{ + return PAGE_SHIFT; +} + +static enum nvkm_memory_target +nvkm_firmware_mem_target(struct nvkm_memory *memory) +{ + if (nvkm_firmware_mem(memory)->device->func->tegra) + return NVKM_MEM_TARGET_NCOH; + + return NVKM_MEM_TARGET_HOST; +} + +static void * +nvkm_firmware_mem_dtor(struct nvkm_memory *memory) +{ + return NULL; +} + +static const struct nvkm_memory_func +nvkm_firmware_mem = { + .dtor = nvkm_firmware_mem_dtor, + .target = nvkm_firmware_mem_target, + 
.page = nvkm_firmware_mem_page, + .addr = nvkm_firmware_mem_addr, + .size = nvkm_firmware_mem_size, + .map = nvkm_firmware_mem_map, +}; + +void +nvkm_firmware_dtor(struct nvkm_firmware *fw) +{ + struct nvkm_memory *memory = &fw->mem.memory; + + if (!fw->img) + return; + + switch (fw->func->type) { + case NVKM_FIRMWARE_IMG_RAM: + kfree(fw->img); + break; + case NVKM_FIRMWARE_IMG_DMA: + nvkm_memory_unref(&memory); + dma_free_coherent(fw->device->dev, sg_dma_len(&fw->mem.sgl), fw->img, fw->phys); + break; + default: + WARN_ON(1); + break; + } + + fw->img = NULL; +} + +int +nvkm_firmware_ctor(const struct nvkm_firmware_func *func, const char *name, + struct nvkm_device *device, const void *src, int len, struct nvkm_firmware *fw) +{ + fw->func = func; + fw->name = name; + fw->device = device; + fw->len = len; + + switch (fw->func->type) { + case NVKM_FIRMWARE_IMG_RAM: + fw->img = kmemdup(src, fw->len, GFP_KERNEL); + break; + case NVKM_FIRMWARE_IMG_DMA: { + dma_addr_t addr; + + len = ALIGN(fw->len, PAGE_SIZE); + + fw->img = dma_alloc_coherent(fw->device->dev, len, &addr, GFP_KERNEL); + if (fw->img) { + memcpy(fw->img, src, fw->len); + fw->phys = addr; + } + + sg_init_one(&fw->mem.sgl, fw->img, len); + sg_dma_address(&fw->mem.sgl) = fw->phys; + sg_dma_len(&fw->mem.sgl) = len; + } + break; + default: + WARN_ON(1); + return -EINVAL; + } + + if (!fw->img) + return -ENOMEM; + + nvkm_memory_ctor(&nvkm_firmware_mem, &fw->mem.memory); + return 0; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/gpuobj.c b/drivers/gpu/drm/nouveau/nvkm/core/gpuobj.c new file mode 100644 index 000000000..d6de2b3ed --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/gpuobj.c @@ -0,0 +1,278 @@ +/* + * Copyright 2012 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + * Authors: Ben Skeggs + */ +#include <core/gpuobj.h> +#include <core/engine.h> + +#include <subdev/instmem.h> +#include <subdev/bar.h> +#include <subdev/mmu.h> + +/* fast-path, where backend is able to provide direct pointer to memory */ +static u32 +nvkm_gpuobj_rd32_fast(struct nvkm_gpuobj *gpuobj, u32 offset) +{ + return ioread32_native(gpuobj->map + offset); +} + +static void +nvkm_gpuobj_wr32_fast(struct nvkm_gpuobj *gpuobj, u32 offset, u32 data) +{ + iowrite32_native(data, gpuobj->map + offset); +} + +/* accessor functions for gpuobjs allocated directly from instmem */ +static int +nvkm_gpuobj_heap_map(struct nvkm_gpuobj *gpuobj, u64 offset, + struct nvkm_vmm *vmm, struct nvkm_vma *vma, + void *argv, u32 argc) +{ + return nvkm_memory_map(gpuobj->memory, offset, vmm, vma, argv, argc); +} + +static u32 +nvkm_gpuobj_heap_rd32(struct nvkm_gpuobj *gpuobj, u32 offset) +{ + return nvkm_ro32(gpuobj->memory, offset); +} + +static void +nvkm_gpuobj_heap_wr32(struct nvkm_gpuobj *gpuobj, u32 offset, u32 data) +{ + nvkm_wo32(gpuobj->memory, offset, data); +} + +static const struct nvkm_gpuobj_func nvkm_gpuobj_heap; +static void +nvkm_gpuobj_heap_release(struct nvkm_gpuobj *gpuobj) +{ + gpuobj->func = &nvkm_gpuobj_heap; + nvkm_done(gpuobj->memory); +} + +static const struct nvkm_gpuobj_func +nvkm_gpuobj_heap_fast = { + .release = nvkm_gpuobj_heap_release, + .rd32 = nvkm_gpuobj_rd32_fast, + .wr32 = nvkm_gpuobj_wr32_fast, + .map = nvkm_gpuobj_heap_map, +}; + +static const struct nvkm_gpuobj_func +nvkm_gpuobj_heap_slow = { + .release = nvkm_gpuobj_heap_release, + .rd32 = nvkm_gpuobj_heap_rd32, + .wr32 = nvkm_gpuobj_heap_wr32, + .map = nvkm_gpuobj_heap_map, +}; + +static void * +nvkm_gpuobj_heap_acquire(struct nvkm_gpuobj *gpuobj) +{ + gpuobj->map = nvkm_kmap(gpuobj->memory); + if (likely(gpuobj->map)) + gpuobj->func = &nvkm_gpuobj_heap_fast; + else + gpuobj->func = &nvkm_gpuobj_heap_slow; + return gpuobj->map; +} + +static const struct nvkm_gpuobj_func +nvkm_gpuobj_heap = { + .acquire = nvkm_gpuobj_heap_acquire, + .map = nvkm_gpuobj_heap_map, +}; + +/* accessor functions for gpuobjs sub-allocated from a parent gpuobj */ +static int +nvkm_gpuobj_map(struct nvkm_gpuobj *gpuobj, u64 offset, + struct nvkm_vmm *vmm, struct nvkm_vma *vma, + void *argv, u32 argc) +{ + return nvkm_memory_map(gpuobj->parent, gpuobj->node->offset + offset, + vmm, vma, argv, argc); +} + +static u32 +nvkm_gpuobj_rd32(struct nvkm_gpuobj *gpuobj, u32 offset) +{ + return nvkm_ro32(gpuobj->parent, gpuobj->node->offset + offset); +} + +static void +nvkm_gpuobj_wr32(struct nvkm_gpuobj *gpuobj, u32 offset, u32 data) +{ + nvkm_wo32(gpuobj->parent, gpuobj->node->offset + offset, data); +} + +static const struct nvkm_gpuobj_func nvkm_gpuobj_func; +static void +nvkm_gpuobj_release(struct nvkm_gpuobj *gpuobj) +{ + gpuobj->func = &nvkm_gpuobj_func; + nvkm_done(gpuobj->parent); +} + +static const struct nvkm_gpuobj_func +nvkm_gpuobj_fast = { + .release = nvkm_gpuobj_release, + .rd32 = nvkm_gpuobj_rd32_fast, + .wr32 = nvkm_gpuobj_wr32_fast, + .map = nvkm_gpuobj_map, +}; + +static const struct nvkm_gpuobj_func +nvkm_gpuobj_slow = { + .release = nvkm_gpuobj_release, + .rd32 = nvkm_gpuobj_rd32, + .wr32 = nvkm_gpuobj_wr32, + .map = nvkm_gpuobj_map, +}; + +static void * +nvkm_gpuobj_acquire(struct nvkm_gpuobj *gpuobj) +{ + gpuobj->map = nvkm_kmap(gpuobj->parent); + if (likely(gpuobj->map)) { + gpuobj->map = (u8 *)gpuobj->map + gpuobj->node->offset; + gpuobj->func = &nvkm_gpuobj_fast; + } else { + gpuobj->func = &nvkm_gpuobj_slow; + } + return 
gpuobj->map; +} + +static const struct nvkm_gpuobj_func +nvkm_gpuobj_func = { + .acquire = nvkm_gpuobj_acquire, + .map = nvkm_gpuobj_map, +}; + +static int +nvkm_gpuobj_ctor(struct nvkm_device *device, u32 size, int align, bool zero, + struct nvkm_gpuobj *parent, struct nvkm_gpuobj *gpuobj) +{ + u32 offset; + int ret; + + if (parent) { + if (align >= 0) { + ret = nvkm_mm_head(&parent->heap, 0, 1, size, size, + max(align, 1), &gpuobj->node); + } else { + ret = nvkm_mm_tail(&parent->heap, 0, 1, size, size, + -align, &gpuobj->node); + } + if (ret) + return ret; + + gpuobj->parent = parent; + gpuobj->func = &nvkm_gpuobj_func; + gpuobj->addr = parent->addr + gpuobj->node->offset; + gpuobj->size = gpuobj->node->length; + + if (zero) { + nvkm_kmap(gpuobj); + for (offset = 0; offset < gpuobj->size; offset += 4) + nvkm_wo32(gpuobj, offset, 0x00000000); + nvkm_done(gpuobj); + } + } else { + ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, size, + abs(align), zero, &gpuobj->memory); + if (ret) + return ret; + + gpuobj->func = &nvkm_gpuobj_heap; + gpuobj->addr = nvkm_memory_addr(gpuobj->memory); + gpuobj->size = nvkm_memory_size(gpuobj->memory); + } + + return nvkm_mm_init(&gpuobj->heap, 0, 0, gpuobj->size, 1); +} + +void +nvkm_gpuobj_del(struct nvkm_gpuobj **pgpuobj) +{ + struct nvkm_gpuobj *gpuobj = *pgpuobj; + if (gpuobj) { + if (gpuobj->parent) + nvkm_mm_free(&gpuobj->parent->heap, &gpuobj->node); + nvkm_mm_fini(&gpuobj->heap); + nvkm_memory_unref(&gpuobj->memory); + kfree(*pgpuobj); + *pgpuobj = NULL; + } +} + +int +nvkm_gpuobj_new(struct nvkm_device *device, u32 size, int align, bool zero, + struct nvkm_gpuobj *parent, struct nvkm_gpuobj **pgpuobj) +{ + struct nvkm_gpuobj *gpuobj; + int ret; + + if (!(gpuobj = *pgpuobj = kzalloc(sizeof(*gpuobj), GFP_KERNEL))) + return -ENOMEM; + + ret = nvkm_gpuobj_ctor(device, size, align, zero, parent, gpuobj); + if (ret) + nvkm_gpuobj_del(pgpuobj); + return ret; +} + +/* the below is basically only here to support sharing the paged dma object + * for PCI(E)GART on <=nv4x chipsets, and should *not* be expected to work + * anywhere else. + */ + +int +nvkm_gpuobj_wrap(struct nvkm_memory *memory, struct nvkm_gpuobj **pgpuobj) +{ + if (!(*pgpuobj = kzalloc(sizeof(**pgpuobj), GFP_KERNEL))) + return -ENOMEM; + + (*pgpuobj)->addr = nvkm_memory_addr(memory); + (*pgpuobj)->size = nvkm_memory_size(memory); + return 0; +} + +void +nvkm_gpuobj_memcpy_to(struct nvkm_gpuobj *dst, u32 dstoffset, void *src, + u32 length) +{ + int i; + + for (i = 0; i < length; i += 4) + nvkm_wo32(dst, dstoffset + i, *(u32 *)(src + i)); +} + +void +nvkm_gpuobj_memcpy_from(void *dst, struct nvkm_gpuobj *src, u32 srcoffset, + u32 length) +{ + int i; + + for (i = 0; i < length; i += 4) + ((u32 *)src)[i / 4] = nvkm_ro32(src, srcoffset + i); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/intr.c b/drivers/gpu/drm/nouveau/nvkm/core/intr.c new file mode 100644 index 000000000..e20b7ca21 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/intr.c @@ -0,0 +1,442 @@ +/* + * Copyright 2021 Red Hat Inc. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#include <core/intr.h> +#include <core/device.h> +#include <core/subdev.h> +#include <subdev/pci.h> +#include <subdev/top.h> + +static int +nvkm_intr_xlat(struct nvkm_subdev *subdev, struct nvkm_intr *intr, + enum nvkm_intr_type type, int *leaf, u32 *mask) +{ + struct nvkm_device *device = subdev->device; + + if (type < NVKM_INTR_VECTOR_0) { + if (type == NVKM_INTR_SUBDEV) { + const struct nvkm_intr_data *data = intr->data; + struct nvkm_top_device *tdev; + + while (data && data->mask) { + if (data->type == NVKM_SUBDEV_TOP) { + list_for_each_entry(tdev, &device->top->device, head) { + if (tdev->intr >= 0 && + tdev->type == subdev->type && + tdev->inst == subdev->inst) { + if (data->mask & BIT(tdev->intr)) { + *leaf = data->leaf; + *mask = BIT(tdev->intr); + return 0; + } + } + } + } else + if (data->type == subdev->type && data->inst == subdev->inst) { + *leaf = data->leaf; + *mask = data->mask; + return 0; + } + + data++; + } + } else { + return -ENOSYS; + } + } else { + if (type < intr->leaves * sizeof(*intr->stat) * 8) { + *leaf = type / 32; + *mask = BIT(type % 32); + return 0; + } + } + + return -EINVAL; +} + +static struct nvkm_intr * +nvkm_intr_find(struct nvkm_subdev *subdev, enum nvkm_intr_type type, int *leaf, u32 *mask) +{ + struct nvkm_intr *intr; + int ret; + + list_for_each_entry(intr, &subdev->device->intr.intr, head) { + ret = nvkm_intr_xlat(subdev, intr, type, leaf, mask); + if (ret == 0) + return intr; + } + + return NULL; +} + +static void +nvkm_intr_allow_locked(struct nvkm_intr *intr, int leaf, u32 mask) +{ + intr->mask[leaf] |= mask; + if (intr->func->allow) { + if (intr->func->reset) + intr->func->reset(intr, leaf, mask); + intr->func->allow(intr, leaf, mask); + } +} + +void +nvkm_intr_allow(struct nvkm_subdev *subdev, enum nvkm_intr_type type) +{ + struct nvkm_device *device = subdev->device; + struct nvkm_intr *intr; + unsigned long flags; + int leaf; + u32 mask; + + intr = nvkm_intr_find(subdev, type, &leaf, &mask); + if (intr) { + nvkm_debug(intr->subdev, "intr %d/%08x allowed by %s\n", leaf, mask, subdev->name); + spin_lock_irqsave(&device->intr.lock, flags); + nvkm_intr_allow_locked(intr, leaf, mask); + spin_unlock_irqrestore(&device->intr.lock, flags); + } +} + +static void +nvkm_intr_block_locked(struct nvkm_intr *intr, int leaf, u32 mask) +{ + intr->mask[leaf] &= ~mask; + if (intr->func->block) + intr->func->block(intr, leaf, mask); +} + +void +nvkm_intr_block(struct 
nvkm_subdev *subdev, enum nvkm_intr_type type) +{ + struct nvkm_device *device = subdev->device; + struct nvkm_intr *intr; + unsigned long flags; + int leaf; + u32 mask; + + intr = nvkm_intr_find(subdev, type, &leaf, &mask); + if (intr) { + nvkm_debug(intr->subdev, "intr %d/%08x blocked by %s\n", leaf, mask, subdev->name); + spin_lock_irqsave(&device->intr.lock, flags); + nvkm_intr_block_locked(intr, leaf, mask); + spin_unlock_irqrestore(&device->intr.lock, flags); + } +} + +static void +nvkm_intr_rearm_locked(struct nvkm_device *device) +{ + struct nvkm_intr *intr; + + list_for_each_entry(intr, &device->intr.intr, head) + intr->func->rearm(intr); +} + +static void +nvkm_intr_unarm_locked(struct nvkm_device *device) +{ + struct nvkm_intr *intr; + + list_for_each_entry(intr, &device->intr.intr, head) + intr->func->unarm(intr); +} + +static irqreturn_t +nvkm_intr(int irq, void *arg) +{ + struct nvkm_device *device = arg; + struct nvkm_intr *intr; + struct nvkm_inth *inth; + irqreturn_t ret = IRQ_NONE; + bool pending = false; + int prio, leaf; + + /* Disable all top-level interrupt sources, and re-arm MSI interrupts. */ + spin_lock(&device->intr.lock); + if (!device->intr.armed) + goto done_unlock; + + nvkm_intr_unarm_locked(device); + nvkm_pci_msi_rearm(device); + + /* Fetch pending interrupt masks. */ + list_for_each_entry(intr, &device->intr.intr, head) { + if (intr->func->pending(intr)) + pending = true; + } + + if (!pending) + goto done; + + /* Check that GPU is still on the bus by reading NV_PMC_BOOT_0. */ + if (WARN_ON(nvkm_rd32(device, 0x000000) == 0xffffffff)) + goto done; + + /* Execute handlers. */ + for (prio = 0; prio < ARRAY_SIZE(device->intr.prio); prio++) { + list_for_each_entry(inth, &device->intr.prio[prio], head) { + struct nvkm_intr *intr = inth->intr; + + if (intr->stat[inth->leaf] & inth->mask) { + if (atomic_read(&inth->allowed)) { + if (intr->func->reset) + intr->func->reset(intr, inth->leaf, inth->mask); + if (inth->func(inth) == IRQ_HANDLED) + ret = IRQ_HANDLED; + } + } + } + } + + /* Nothing handled? Some debugging/protection from IRQ storms is in order... */ + if (ret == IRQ_NONE) { + list_for_each_entry(intr, &device->intr.intr, head) { + for (leaf = 0; leaf < intr->leaves; leaf++) { + if (intr->stat[leaf]) { + nvkm_warn(intr->subdev, "intr%d: %08x\n", + leaf, intr->stat[leaf]); + nvkm_intr_block_locked(intr, leaf, intr->stat[leaf]); + } + } + } + } + +done: + /* Re-enable all top-level interrupt sources. 
*/ + nvkm_intr_rearm_locked(device); +done_unlock: + spin_unlock(&device->intr.lock); + return ret; +} + +int +nvkm_intr_add(const struct nvkm_intr_func *func, const struct nvkm_intr_data *data, + struct nvkm_subdev *subdev, int leaves, struct nvkm_intr *intr) +{ + struct nvkm_device *device = subdev->device; + int i; + + intr->func = func; + intr->data = data; + intr->subdev = subdev; + intr->leaves = leaves; + intr->stat = kcalloc(leaves, sizeof(*intr->stat), GFP_KERNEL); + intr->mask = kcalloc(leaves, sizeof(*intr->mask), GFP_KERNEL); + if (!intr->stat || !intr->mask) { + kfree(intr->stat); + return -ENOMEM; + } + + if (intr->subdev->debug >= NV_DBG_DEBUG) { + for (i = 0; i < intr->leaves; i++) + intr->mask[i] = ~0; + } + + spin_lock_irq(&device->intr.lock); + list_add_tail(&intr->head, &device->intr.intr); + spin_unlock_irq(&device->intr.lock); + return 0; +} + +static irqreturn_t +nvkm_intr_subdev(struct nvkm_inth *inth) +{ + struct nvkm_subdev *subdev = container_of(inth, typeof(*subdev), inth); + + nvkm_subdev_intr(subdev); + return IRQ_HANDLED; +} + +static void +nvkm_intr_subdev_add_dev(struct nvkm_intr *intr, enum nvkm_subdev_type type, int inst) +{ + struct nvkm_subdev *subdev; + enum nvkm_intr_prio prio; + int ret; + + subdev = nvkm_device_subdev(intr->subdev->device, type, inst); + if (!subdev || !subdev->func->intr) + return; + + if (type == NVKM_ENGINE_DISP) + prio = NVKM_INTR_PRIO_VBLANK; + else + prio = NVKM_INTR_PRIO_NORMAL; + + ret = nvkm_inth_add(intr, NVKM_INTR_SUBDEV, prio, subdev, nvkm_intr_subdev, &subdev->inth); + if (WARN_ON(ret)) + return; + + nvkm_inth_allow(&subdev->inth); +} + +static void +nvkm_intr_subdev_add(struct nvkm_intr *intr) +{ + const struct nvkm_intr_data *data; + struct nvkm_device *device = intr->subdev->device; + struct nvkm_top_device *tdev; + + for (data = intr->data; data && data->mask; data++) { + if (data->legacy) { + if (data->type == NVKM_SUBDEV_TOP) { + list_for_each_entry(tdev, &device->top->device, head) { + if (tdev->intr < 0 || !(data->mask & BIT(tdev->intr))) + continue; + + nvkm_intr_subdev_add_dev(intr, tdev->type, tdev->inst); + } + } else { + nvkm_intr_subdev_add_dev(intr, data->type, data->inst); + } + } + } +} + +void +nvkm_intr_rearm(struct nvkm_device *device) +{ + struct nvkm_intr *intr; + int i; + + if (unlikely(!device->intr.legacy_done)) { + list_for_each_entry(intr, &device->intr.intr, head) + nvkm_intr_subdev_add(intr); + device->intr.legacy_done = true; + } + + spin_lock_irq(&device->intr.lock); + list_for_each_entry(intr, &device->intr.intr, head) { + for (i = 0; intr->func->block && i < intr->leaves; i++) { + intr->func->block(intr, i, ~0); + intr->func->allow(intr, i, intr->mask[i]); + } + } + + nvkm_intr_rearm_locked(device); + device->intr.armed = true; + spin_unlock_irq(&device->intr.lock); +} + +void +nvkm_intr_unarm(struct nvkm_device *device) +{ + spin_lock_irq(&device->intr.lock); + nvkm_intr_unarm_locked(device); + device->intr.armed = false; + spin_unlock_irq(&device->intr.lock); +} + +int +nvkm_intr_install(struct nvkm_device *device) +{ + int ret; + + device->intr.irq = device->func->irq(device); + if (device->intr.irq < 0) + return device->intr.irq; + + ret = request_irq(device->intr.irq, nvkm_intr, IRQF_SHARED, "nvkm", device); + if (ret) + return ret; + + device->intr.alloc = true; + return 0; +} + +void +nvkm_intr_dtor(struct nvkm_device *device) +{ + struct nvkm_intr *intr, *intt; + + list_for_each_entry_safe(intr, intt, &device->intr.intr, head) { + list_del(&intr->head); + kfree(intr->mask); + 
kfree(intr->stat); + } + + if (device->intr.alloc) + free_irq(device->intr.irq, device); +} + +void +nvkm_intr_ctor(struct nvkm_device *device) +{ + int i; + + INIT_LIST_HEAD(&device->intr.intr); + for (i = 0; i < ARRAY_SIZE(device->intr.prio); i++) + INIT_LIST_HEAD(&device->intr.prio[i]); + + spin_lock_init(&device->intr.lock); + device->intr.armed = false; +} + +void +nvkm_inth_block(struct nvkm_inth *inth) +{ + if (unlikely(!inth->intr)) + return; + + atomic_set(&inth->allowed, 0); +} + +void +nvkm_inth_allow(struct nvkm_inth *inth) +{ + struct nvkm_intr *intr = inth->intr; + unsigned long flags; + + if (unlikely(!inth->intr)) + return; + + spin_lock_irqsave(&intr->subdev->device->intr.lock, flags); + if (!atomic_xchg(&inth->allowed, 1)) { + if ((intr->mask[inth->leaf] & inth->mask) != inth->mask) + nvkm_intr_allow_locked(intr, inth->leaf, inth->mask); + } + spin_unlock_irqrestore(&intr->subdev->device->intr.lock, flags); +} + +int +nvkm_inth_add(struct nvkm_intr *intr, enum nvkm_intr_type type, enum nvkm_intr_prio prio, + struct nvkm_subdev *subdev, nvkm_inth_func func, struct nvkm_inth *inth) +{ + struct nvkm_device *device = subdev->device; + int ret; + + if (WARN_ON(inth->mask)) + return -EBUSY; + + ret = nvkm_intr_xlat(subdev, intr, type, &inth->leaf, &inth->mask); + if (ret) + return ret; + + nvkm_debug(intr->subdev, "intr %d/%08x requested by %s\n", + inth->leaf, inth->mask, subdev->name); + + inth->intr = intr; + inth->func = func; + atomic_set(&inth->allowed, 0); + list_add_tail(&inth->head, &device->intr.prio[prio]); + return 0; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/ioctl.c b/drivers/gpu/drm/nouveau/nvkm/core/ioctl.c new file mode 100644 index 000000000..0b33287e4 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/ioctl.c @@ -0,0 +1,390 @@ +/* + * Copyright 2014 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + * Authors: Ben Skeggs <bskeggs@redhat.com> + */ +#include <core/ioctl.h> +#include <core/client.h> +#include <core/engine.h> +#include <core/event.h> + +#include <nvif/unpack.h> +#include <nvif/ioctl.h> + +static int +nvkm_ioctl_nop(struct nvkm_client *client, + struct nvkm_object *object, void *data, u32 size) +{ + union { + struct nvif_ioctl_nop_v0 v0; + } *args = data; + int ret = -ENOSYS; + + nvif_ioctl(object, "nop size %d\n", size); + if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) { + nvif_ioctl(object, "nop vers %lld\n", args->v0.version); + args->v0.version = NVIF_VERSION_LATEST; + } + + return ret; +} + +#include <nvif/class.h> + +static int +nvkm_ioctl_sclass_(struct nvkm_object *object, int index, struct nvkm_oclass *oclass) +{ + if ( object->func->uevent && + !object->func->uevent(object, NULL, 0, NULL) && index-- == 0) { + oclass->ctor = nvkm_uevent_new; + oclass->base.minver = 0; + oclass->base.maxver = 0; + oclass->base.oclass = NVIF_CLASS_EVENT; + return 0; + } + + if (object->func->sclass) + return object->func->sclass(object, index, oclass); + + return -ENOSYS; +} + +static int +nvkm_ioctl_sclass(struct nvkm_client *client, + struct nvkm_object *object, void *data, u32 size) +{ + union { + struct nvif_ioctl_sclass_v0 v0; + } *args = data; + struct nvkm_oclass oclass = { .client = client }; + int ret = -ENOSYS, i = 0; + + nvif_ioctl(object, "sclass size %d\n", size); + if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, true))) { + nvif_ioctl(object, "sclass vers %d count %d\n", + args->v0.version, args->v0.count); + if (size != args->v0.count * sizeof(args->v0.oclass[0])) + return -EINVAL; + + while (nvkm_ioctl_sclass_(object, i, &oclass) >= 0) { + if (i < args->v0.count) { + args->v0.oclass[i].oclass = oclass.base.oclass; + args->v0.oclass[i].minver = oclass.base.minver; + args->v0.oclass[i].maxver = oclass.base.maxver; + } + i++; + } + + args->v0.count = i; + } + + return ret; +} + +static int +nvkm_ioctl_new(struct nvkm_client *client, + struct nvkm_object *parent, void *data, u32 size) +{ + union { + struct nvif_ioctl_new_v0 v0; + } *args = data; + struct nvkm_object *object = NULL; + struct nvkm_oclass oclass; + int ret = -ENOSYS, i = 0; + + nvif_ioctl(parent, "new size %d\n", size); + if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, true))) { + nvif_ioctl(parent, "new vers %d handle %08x class %08x " + "route %02x token %llx object %016llx\n", + args->v0.version, args->v0.handle, args->v0.oclass, + args->v0.route, args->v0.token, args->v0.object); + } else + return ret; + + if (!parent->func->sclass && !parent->func->uevent) { + nvif_ioctl(parent, "cannot have children\n"); + return -EINVAL; + } + + do { + memset(&oclass, 0x00, sizeof(oclass)); + oclass.handle = args->v0.handle; + oclass.route = args->v0.route; + oclass.token = args->v0.token; + oclass.object = args->v0.object; + oclass.client = client; + oclass.parent = parent; + ret = nvkm_ioctl_sclass_(parent, i++, &oclass); + if (ret) + return ret; + } while (oclass.base.oclass != args->v0.oclass); + + if (oclass.engine) { + oclass.engine = nvkm_engine_ref(oclass.engine); + if (IS_ERR(oclass.engine)) + return PTR_ERR(oclass.engine); + } + + ret = oclass.ctor(&oclass, data, size, &object); + nvkm_engine_unref(&oclass.engine); + if (ret == 0) { + ret = nvkm_object_init(object); + if (ret == 0) { + list_add_tail(&object->head, &parent->tree); + if (nvkm_object_insert(object)) { + client->data = object; + return 0; + } + ret = -EEXIST; + } + nvkm_object_fini(object, 
false); + } + + nvkm_object_del(&object); + return ret; +} + +static int +nvkm_ioctl_del(struct nvkm_client *client, + struct nvkm_object *object, void *data, u32 size) +{ + union { + struct nvif_ioctl_del none; + } *args = data; + int ret = -ENOSYS; + + nvif_ioctl(object, "delete size %d\n", size); + if (!(ret = nvif_unvers(ret, &data, &size, args->none))) { + nvif_ioctl(object, "delete\n"); + nvkm_object_fini(object, false); + nvkm_object_del(&object); + } + + return ret ? ret : 1; +} + +static int +nvkm_ioctl_mthd(struct nvkm_client *client, + struct nvkm_object *object, void *data, u32 size) +{ + union { + struct nvif_ioctl_mthd_v0 v0; + } *args = data; + int ret = -ENOSYS; + + nvif_ioctl(object, "mthd size %d\n", size); + if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, true))) { + nvif_ioctl(object, "mthd vers %d mthd %02x\n", + args->v0.version, args->v0.method); + ret = nvkm_object_mthd(object, args->v0.method, data, size); + } + + return ret; +} + + +static int +nvkm_ioctl_rd(struct nvkm_client *client, + struct nvkm_object *object, void *data, u32 size) +{ + union { + struct nvif_ioctl_rd_v0 v0; + } *args = data; + union { + u8 b08; + u16 b16; + u32 b32; + } v; + int ret = -ENOSYS; + + nvif_ioctl(object, "rd size %d\n", size); + if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) { + nvif_ioctl(object, "rd vers %d size %d addr %016llx\n", + args->v0.version, args->v0.size, args->v0.addr); + switch (args->v0.size) { + case 1: + ret = nvkm_object_rd08(object, args->v0.addr, &v.b08); + args->v0.data = v.b08; + break; + case 2: + ret = nvkm_object_rd16(object, args->v0.addr, &v.b16); + args->v0.data = v.b16; + break; + case 4: + ret = nvkm_object_rd32(object, args->v0.addr, &v.b32); + args->v0.data = v.b32; + break; + default: + ret = -EINVAL; + break; + } + } + + return ret; +} + +static int +nvkm_ioctl_wr(struct nvkm_client *client, + struct nvkm_object *object, void *data, u32 size) +{ + union { + struct nvif_ioctl_wr_v0 v0; + } *args = data; + int ret = -ENOSYS; + + nvif_ioctl(object, "wr size %d\n", size); + if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) { + nvif_ioctl(object, + "wr vers %d size %d addr %016llx data %08x\n", + args->v0.version, args->v0.size, args->v0.addr, + args->v0.data); + } else + return ret; + + switch (args->v0.size) { + case 1: return nvkm_object_wr08(object, args->v0.addr, args->v0.data); + case 2: return nvkm_object_wr16(object, args->v0.addr, args->v0.data); + case 4: return nvkm_object_wr32(object, args->v0.addr, args->v0.data); + default: + break; + } + + return -EINVAL; +} + +static int +nvkm_ioctl_map(struct nvkm_client *client, + struct nvkm_object *object, void *data, u32 size) +{ + union { + struct nvif_ioctl_map_v0 v0; + } *args = data; + enum nvkm_object_map type; + int ret = -ENOSYS; + + nvif_ioctl(object, "map size %d\n", size); + if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, true))) { + nvif_ioctl(object, "map vers %d\n", args->v0.version); + ret = nvkm_object_map(object, data, size, &type, + &args->v0.handle, + &args->v0.length); + if (type == NVKM_OBJECT_MAP_IO) + args->v0.type = NVIF_IOCTL_MAP_V0_IO; + else + args->v0.type = NVIF_IOCTL_MAP_V0_VA; + } + + return ret; +} + +static int +nvkm_ioctl_unmap(struct nvkm_client *client, + struct nvkm_object *object, void *data, u32 size) +{ + union { + struct nvif_ioctl_unmap none; + } *args = data; + int ret = -ENOSYS; + + nvif_ioctl(object, "unmap size %d\n", size); + if (!(ret = nvif_unvers(ret, &data, &size, args->none))) { + 
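/* Every handler in this file follows the unpacking idiom seen here: ret
 * starts as -ENOSYS, and nvif_unpack()/nvif_unvers() replace it with 0,
 * advancing data/size past the parsed header, only when the payload matches
 * the expected struct version and size.  Roughly (a sketch of the idiom,
 * not the macros' definitions; the trailing bool of nvif_unpack() states
 * whether extra payload bytes are permitted):
 *
 *	int ret = -ENOSYS;
 *	if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, true))) {
 *		// args->v0 validated; data/size now describe the remainder
 *	}
 *	return ret;
 */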
nvif_ioctl(object, "unmap\n"); + ret = nvkm_object_unmap(object); + } + + return ret; +} + +static struct { + int version; + int (*func)(struct nvkm_client *, struct nvkm_object *, void *, u32); +} +nvkm_ioctl_v0[] = { + { 0x00, nvkm_ioctl_nop }, + { 0x00, nvkm_ioctl_sclass }, + { 0x00, nvkm_ioctl_new }, + { 0x00, nvkm_ioctl_del }, + { 0x00, nvkm_ioctl_mthd }, + { 0x00, nvkm_ioctl_rd }, + { 0x00, nvkm_ioctl_wr }, + { 0x00, nvkm_ioctl_map }, + { 0x00, nvkm_ioctl_unmap }, +}; + +static int +nvkm_ioctl_path(struct nvkm_client *client, u64 handle, u32 type, + void *data, u32 size, u8 owner, u8 *route, u64 *token) +{ + struct nvkm_object *object; + int ret; + + object = nvkm_object_search(client, handle, NULL); + if (IS_ERR(object)) { + nvif_ioctl(&client->object, "object not found\n"); + return PTR_ERR(object); + } + + if (owner != NVIF_IOCTL_V0_OWNER_ANY && owner != object->route) { + nvif_ioctl(&client->object, "route != owner\n"); + return -EACCES; + } + *route = object->route; + *token = object->token; + + if (ret = -EINVAL, type < ARRAY_SIZE(nvkm_ioctl_v0)) { + if (nvkm_ioctl_v0[type].version == 0) + ret = nvkm_ioctl_v0[type].func(client, object, data, size); + } + + return ret; +} + +int +nvkm_ioctl(struct nvkm_client *client, void *data, u32 size, void **hack) +{ + struct nvkm_object *object = &client->object; + union { + struct nvif_ioctl_v0 v0; + } *args = data; + int ret = -ENOSYS; + + nvif_ioctl(object, "size %d\n", size); + + if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, true))) { + nvif_ioctl(object, + "vers %d type %02x object %016llx owner %02x\n", + args->v0.version, args->v0.type, args->v0.object, + args->v0.owner); + ret = nvkm_ioctl_path(client, args->v0.object, args->v0.type, + data, size, args->v0.owner, + &args->v0.route, &args->v0.token); + } + + if (ret != 1) { + nvif_ioctl(object, "return %d\n", ret); + if (hack) { + *hack = client->data; + client->data = NULL; + } + } + + return ret; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/memory.c b/drivers/gpu/drm/nouveau/nvkm/core/memory.c new file mode 100644 index 000000000..c69daac9b --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/memory.c @@ -0,0 +1,154 @@ +/* + * Copyright 2015 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + * Authors: Ben Skeggs <bskeggs@redhat.com> + */ +#include <core/memory.h> +#include <core/mm.h> +#include <subdev/fb.h> +#include <subdev/instmem.h> + +void +nvkm_memory_tags_put(struct nvkm_memory *memory, struct nvkm_device *device, + struct nvkm_tags **ptags) +{ + struct nvkm_fb *fb = device->fb; + struct nvkm_tags *tags = *ptags; + if (tags) { + mutex_lock(&fb->tags.mutex); + if (refcount_dec_and_test(&tags->refcount)) { + nvkm_mm_free(&fb->tags.mm, &tags->mn); + kfree(memory->tags); + memory->tags = NULL; + } + mutex_unlock(&fb->tags.mutex); + *ptags = NULL; + } +} + +int +nvkm_memory_tags_get(struct nvkm_memory *memory, struct nvkm_device *device, + u32 nr, void (*clr)(struct nvkm_device *, u32, u32), + struct nvkm_tags **ptags) +{ + struct nvkm_fb *fb = device->fb; + struct nvkm_tags *tags; + + mutex_lock(&fb->tags.mutex); + if ((tags = memory->tags)) { + /* If comptags exist for the memory, but a different amount + * than requested, the buffer is being mapped with settings + * that are incompatible with existing mappings. + */ + if (tags->mn && tags->mn->length != nr) { + mutex_unlock(&fb->tags.mutex); + return -EINVAL; + } + + refcount_inc(&tags->refcount); + mutex_unlock(&fb->tags.mutex); + *ptags = tags; + return 0; + } + + if (!(tags = kmalloc(sizeof(*tags), GFP_KERNEL))) { + mutex_unlock(&fb->tags.mutex); + return -ENOMEM; + } + + if (!nvkm_mm_head(&fb->tags.mm, 0, 1, nr, nr, 1, &tags->mn)) { + if (clr) + clr(device, tags->mn->offset, tags->mn->length); + } else { + /* Failure to allocate HW comptags is not an error, the + * caller should fall back to an uncompressed map. + * + * As memory can be mapped in multiple places, we still + * need to track the allocation failure and ensure that + * any additional mappings remain uncompressed. + * + * This is handled by returning an empty nvkm_tags. + */ + tags->mn = NULL; + } + + refcount_set(&tags->refcount, 1); + *ptags = memory->tags = tags; + mutex_unlock(&fb->tags.mutex); + return 0; +} + +void +nvkm_memory_ctor(const struct nvkm_memory_func *func, + struct nvkm_memory *memory) +{ + memory->func = func; + kref_init(&memory->kref); +} + +static void +nvkm_memory_del(struct kref *kref) +{ + struct nvkm_memory *memory = container_of(kref, typeof(*memory), kref); + if (!WARN_ON(!memory->func)) { + if (memory->func->dtor) + memory = memory->func->dtor(memory); + kfree(memory); + } +} + +void +nvkm_memory_unref(struct nvkm_memory **pmemory) +{ + struct nvkm_memory *memory = *pmemory; + if (memory) { + kref_put(&memory->kref, nvkm_memory_del); + *pmemory = NULL; + } +} + +struct nvkm_memory * +nvkm_memory_ref(struct nvkm_memory *memory) +{ + if (memory) + kref_get(&memory->kref); + return memory; +} + +int +nvkm_memory_new(struct nvkm_device *device, enum nvkm_memory_target target, + u64 size, u32 align, bool zero, + struct nvkm_memory **pmemory) +{ + struct nvkm_instmem *imem = device->imem; + struct nvkm_memory *memory; + int ret; + + if (unlikely(target != NVKM_MEM_TARGET_INST || !imem)) + return -ENOSYS; + + ret = nvkm_instobj_new(imem, size, align, zero, &memory); + if (ret) + return ret; + + *pmemory = memory; + return 0; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/mm.c b/drivers/gpu/drm/nouveau/nvkm/core/mm.c new file mode 100644 index 000000000..f78a06a6b --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/mm.c @@ -0,0 +1,307 @@ +/* + * Copyright 2012 Red Hat Inc. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: Ben Skeggs + */ +#include <core/mm.h> + +#define node(root, dir) ((root)->nl_entry.dir == &mm->nodes) ? NULL : \ + list_entry((root)->nl_entry.dir, struct nvkm_mm_node, nl_entry) + +void +nvkm_mm_dump(struct nvkm_mm *mm, const char *header) +{ + struct nvkm_mm_node *node; + + pr_err("nvkm: %s\n", header); + pr_err("nvkm: node list:\n"); + list_for_each_entry(node, &mm->nodes, nl_entry) { + pr_err("nvkm: \t%08x %08x %d\n", + node->offset, node->length, node->type); + } + pr_err("nvkm: free list:\n"); + list_for_each_entry(node, &mm->free, fl_entry) { + pr_err("nvkm: \t%08x %08x %d\n", + node->offset, node->length, node->type); + } +} + +void +nvkm_mm_free(struct nvkm_mm *mm, struct nvkm_mm_node **pthis) +{ + struct nvkm_mm_node *this = *pthis; + + if (this) { + struct nvkm_mm_node *prev = node(this, prev); + struct nvkm_mm_node *next = node(this, next); + + if (prev && prev->type == NVKM_MM_TYPE_NONE) { + prev->length += this->length; + list_del(&this->nl_entry); + kfree(this); this = prev; + } + + if (next && next->type == NVKM_MM_TYPE_NONE) { + next->offset = this->offset; + next->length += this->length; + if (this->type == NVKM_MM_TYPE_NONE) + list_del(&this->fl_entry); + list_del(&this->nl_entry); + kfree(this); this = NULL; + } + + if (this && this->type != NVKM_MM_TYPE_NONE) { + list_for_each_entry(prev, &mm->free, fl_entry) { + if (this->offset < prev->offset) + break; + } + + list_add_tail(&this->fl_entry, &prev->fl_entry); + this->type = NVKM_MM_TYPE_NONE; + } + } + + *pthis = NULL; +} + +static struct nvkm_mm_node * +region_head(struct nvkm_mm *mm, struct nvkm_mm_node *a, u32 size) +{ + struct nvkm_mm_node *b; + + if (a->length == size) + return a; + + b = kmalloc(sizeof(*b), GFP_KERNEL); + if (unlikely(b == NULL)) + return NULL; + + b->offset = a->offset; + b->length = size; + b->heap = a->heap; + b->type = a->type; + a->offset += size; + a->length -= size; + list_add_tail(&b->nl_entry, &a->nl_entry); + if (b->type == NVKM_MM_TYPE_NONE) + list_add_tail(&b->fl_entry, &a->fl_entry); + + return b; +} + +int +nvkm_mm_head(struct nvkm_mm *mm, u8 heap, u8 type, u32 size_max, u32 size_min, + u32 align, struct nvkm_mm_node **pnode) +{ + struct nvkm_mm_node *prev, *this, *next; + u32 mask = align - 1; + u32 splitoff; + u32 s, e; + + BUG_ON(type == NVKM_MM_TYPE_NONE || type == NVKM_MM_TYPE_HOLE); + + list_for_each_entry(this, &mm->free, fl_entry) { + if (unlikely(heap != NVKM_MM_HEAP_ANY)) { + if 
(this->heap != heap) + continue; + } + e = this->offset + this->length; + s = this->offset; + + prev = node(this, prev); + if (prev && prev->type != type) + s = roundup(s, mm->block_size); + + next = node(this, next); + if (next && next->type != type) + e = rounddown(e, mm->block_size); + + s = (s + mask) & ~mask; + e &= ~mask; + if (s > e || e - s < size_min) + continue; + + splitoff = s - this->offset; + if (splitoff && !region_head(mm, this, splitoff)) + return -ENOMEM; + + this = region_head(mm, this, min(size_max, e - s)); + if (!this) + return -ENOMEM; + + this->next = NULL; + this->type = type; + list_del(&this->fl_entry); + *pnode = this; + return 0; + } + + return -ENOSPC; +} + +static struct nvkm_mm_node * +region_tail(struct nvkm_mm *mm, struct nvkm_mm_node *a, u32 size) +{ + struct nvkm_mm_node *b; + + if (a->length == size) + return a; + + b = kmalloc(sizeof(*b), GFP_KERNEL); + if (unlikely(b == NULL)) + return NULL; + + a->length -= size; + b->offset = a->offset + a->length; + b->length = size; + b->heap = a->heap; + b->type = a->type; + + list_add(&b->nl_entry, &a->nl_entry); + if (b->type == NVKM_MM_TYPE_NONE) + list_add(&b->fl_entry, &a->fl_entry); + + return b; +} + +int +nvkm_mm_tail(struct nvkm_mm *mm, u8 heap, u8 type, u32 size_max, u32 size_min, + u32 align, struct nvkm_mm_node **pnode) +{ + struct nvkm_mm_node *prev, *this, *next; + u32 mask = align - 1; + + BUG_ON(type == NVKM_MM_TYPE_NONE || type == NVKM_MM_TYPE_HOLE); + + list_for_each_entry_reverse(this, &mm->free, fl_entry) { + u32 e = this->offset + this->length; + u32 s = this->offset; + u32 c = 0, a; + if (unlikely(heap != NVKM_MM_HEAP_ANY)) { + if (this->heap != heap) + continue; + } + + prev = node(this, prev); + if (prev && prev->type != type) + s = roundup(s, mm->block_size); + + next = node(this, next); + if (next && next->type != type) { + e = rounddown(e, mm->block_size); + c = next->offset - e; + } + + s = (s + mask) & ~mask; + a = e - s; + if (s > e || a < size_min) + continue; + + a = min(a, size_max); + s = (e - a) & ~mask; + c += (e - s) - a; + + if (c && !region_tail(mm, this, c)) + return -ENOMEM; + + this = region_tail(mm, this, a); + if (!this) + return -ENOMEM; + + this->next = NULL; + this->type = type; + list_del(&this->fl_entry); + *pnode = this; + return 0; + } + + return -ENOSPC; +} + +int +nvkm_mm_init(struct nvkm_mm *mm, u8 heap, u32 offset, u32 length, u32 block) +{ + struct nvkm_mm_node *node, *prev; + u32 next; + + if (nvkm_mm_initialised(mm)) { + prev = list_last_entry(&mm->nodes, typeof(*node), nl_entry); + next = prev->offset + prev->length; + if (next != offset) { + BUG_ON(next > offset); + if (!(node = kzalloc(sizeof(*node), GFP_KERNEL))) + return -ENOMEM; + node->type = NVKM_MM_TYPE_HOLE; + node->offset = next; + node->length = offset - next; + list_add_tail(&node->nl_entry, &mm->nodes); + } + BUG_ON(block != mm->block_size); + } else { + INIT_LIST_HEAD(&mm->nodes); + INIT_LIST_HEAD(&mm->free); + mm->block_size = block; + mm->heap_nodes = 0; + } + + node = kzalloc(sizeof(*node), GFP_KERNEL); + if (!node) + return -ENOMEM; + + if (length) { + node->offset = roundup(offset, mm->block_size); + node->length = rounddown(offset + length, mm->block_size); + node->length -= node->offset; + } + + list_add_tail(&node->nl_entry, &mm->nodes); + list_add_tail(&node->fl_entry, &mm->free); + node->heap = heap; + mm->heap_nodes++; + return 0; +} + +int +nvkm_mm_fini(struct nvkm_mm *mm) +{ + struct nvkm_mm_node *node, *temp; + int nodes = 0; + + if (!nvkm_mm_initialised(mm)) + return 0; + + 
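/* Callers drive the allocator in this file as a simple first-fit range
 * manager: nvkm_mm_init() seeds a heap of block-sized units, then
 * nvkm_mm_head()/nvkm_mm_tail() carve typed nodes from the bottom or top of
 * free space, and nvkm_mm_free() coalesces them back.  A usage sketch,
 * with argument values illustrative only:
 *
 *	struct nvkm_mm mm = {};
 *	struct nvkm_mm_node *node = NULL;
 *	int ret = nvkm_mm_init(&mm, 0, 0, 0x100000, 1);
 *	if (ret == 0)
 *		ret = nvkm_mm_head(&mm, NVKM_MM_HEAP_ANY, 1,
 *				   0x1000, 0x1000, 1, &node);
 *	nvkm_mm_free(&mm, &node);
 *	nvkm_mm_fini(&mm);
 */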
list_for_each_entry(node, &mm->nodes, nl_entry) { + if (node->type != NVKM_MM_TYPE_HOLE) { + if (++nodes > mm->heap_nodes) { + nvkm_mm_dump(mm, "mm not clean!"); + return -EBUSY; + } + } + } + + list_for_each_entry_safe(node, temp, &mm->nodes, nl_entry) { + list_del(&node->nl_entry); + kfree(node); + } + + mm->heap_nodes = 0; + return 0; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/object.c b/drivers/gpu/drm/nouveau/nvkm/core/object.c new file mode 100644 index 000000000..301a5e5b5 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/object.c @@ -0,0 +1,336 @@ +/* + * Copyright 2012 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: Ben Skeggs + */ +#include <core/object.h> +#include <core/client.h> +#include <core/engine.h> + +struct nvkm_object * +nvkm_object_search(struct nvkm_client *client, u64 handle, + const struct nvkm_object_func *func) +{ + struct nvkm_object *object; + + if (handle) { + struct rb_node *node = client->objroot.rb_node; + while (node) { + object = rb_entry(node, typeof(*object), node); + if (handle < object->object) + node = node->rb_left; + else + if (handle > object->object) + node = node->rb_right; + else + goto done; + } + return ERR_PTR(-ENOENT); + } else { + object = &client->object; + } + +done: + if (unlikely(func && object->func != func)) + return ERR_PTR(-EINVAL); + return object; +} + +void +nvkm_object_remove(struct nvkm_object *object) +{ + if (!RB_EMPTY_NODE(&object->node)) + rb_erase(&object->node, &object->client->objroot); +} + +bool +nvkm_object_insert(struct nvkm_object *object) +{ + struct rb_node **ptr = &object->client->objroot.rb_node; + struct rb_node *parent = NULL; + + while (*ptr) { + struct nvkm_object *this = rb_entry(*ptr, typeof(*this), node); + parent = *ptr; + if (object->object < this->object) + ptr = &parent->rb_left; + else + if (object->object > this->object) + ptr = &parent->rb_right; + else + return false; + } + + rb_link_node(&object->node, parent, ptr); + rb_insert_color(&object->node, &object->client->objroot); + return true; +} + +int +nvkm_object_mthd(struct nvkm_object *object, u32 mthd, void *data, u32 size) +{ + if (likely(object->func->mthd)) + return object->func->mthd(object, mthd, data, size); + return -ENODEV; +} + +int +nvkm_object_ntfy(struct nvkm_object *object, u32 mthd, + struct nvkm_event **pevent) +{ + if (likely(object->func->ntfy)) + return object->func->ntfy(object, mthd, pevent); + return -ENODEV; +} + +int +nvkm_object_map(struct nvkm_object 
*object, void *argv, u32 argc, + enum nvkm_object_map *type, u64 *addr, u64 *size) +{ + if (likely(object->func->map)) + return object->func->map(object, argv, argc, type, addr, size); + return -ENODEV; +} + +int +nvkm_object_unmap(struct nvkm_object *object) +{ + if (likely(object->func->unmap)) + return object->func->unmap(object); + return -ENODEV; +} + +int +nvkm_object_rd08(struct nvkm_object *object, u64 addr, u8 *data) +{ + if (likely(object->func->rd08)) + return object->func->rd08(object, addr, data); + return -ENODEV; +} + +int +nvkm_object_rd16(struct nvkm_object *object, u64 addr, u16 *data) +{ + if (likely(object->func->rd16)) + return object->func->rd16(object, addr, data); + return -ENODEV; +} + +int +nvkm_object_rd32(struct nvkm_object *object, u64 addr, u32 *data) +{ + if (likely(object->func->rd32)) + return object->func->rd32(object, addr, data); + return -ENODEV; +} + +int +nvkm_object_wr08(struct nvkm_object *object, u64 addr, u8 data) +{ + if (likely(object->func->wr08)) + return object->func->wr08(object, addr, data); + return -ENODEV; +} + +int +nvkm_object_wr16(struct nvkm_object *object, u64 addr, u16 data) +{ + if (likely(object->func->wr16)) + return object->func->wr16(object, addr, data); + return -ENODEV; +} + +int +nvkm_object_wr32(struct nvkm_object *object, u64 addr, u32 data) +{ + if (likely(object->func->wr32)) + return object->func->wr32(object, addr, data); + return -ENODEV; +} + +int +nvkm_object_bind(struct nvkm_object *object, struct nvkm_gpuobj *gpuobj, + int align, struct nvkm_gpuobj **pgpuobj) +{ + if (object->func->bind) + return object->func->bind(object, gpuobj, align, pgpuobj); + return -ENODEV; +} + +int +nvkm_object_fini(struct nvkm_object *object, bool suspend) +{ + const char *action = suspend ? "suspend" : "fini"; + struct nvkm_object *child; + s64 time; + int ret; + + nvif_debug(object, "%s children...\n", action); + time = ktime_to_us(ktime_get()); + list_for_each_entry(child, &object->tree, head) { + ret = nvkm_object_fini(child, suspend); + if (ret && suspend) + goto fail_child; + } + + nvif_debug(object, "%s running...\n", action); + if (object->func->fini) { + ret = object->func->fini(object, suspend); + if (ret) { + nvif_error(object, "%s failed with %d\n", action, ret); + if (suspend) + goto fail; + } + } + + time = ktime_to_us(ktime_get()) - time; + nvif_debug(object, "%s completed in %lldus\n", action, time); + return 0; + +fail: + if (object->func->init) { + int rret = object->func->init(object); + if (rret) + nvif_fatal(object, "failed to restart, %d\n", rret); + } +fail_child: + list_for_each_entry_continue_reverse(child, &object->tree, head) { + nvkm_object_init(child); + } + return ret; +} + +int +nvkm_object_init(struct nvkm_object *object) +{ + struct nvkm_object *child; + s64 time; + int ret; + + nvif_debug(object, "init running...\n"); + time = ktime_to_us(ktime_get()); + if (object->func->init) { + ret = object->func->init(object); + if (ret) + goto fail; + } + + nvif_debug(object, "init children...\n"); + list_for_each_entry(child, &object->tree, head) { + ret = nvkm_object_init(child); + if (ret) + goto fail_child; + } + + time = ktime_to_us(ktime_get()) - time; + nvif_debug(object, "init completed in %lldus\n", time); + return 0; + +fail_child: + list_for_each_entry_continue_reverse(child, &object->tree, head) + nvkm_object_fini(child, false); +fail: + nvif_error(object, "init failed with %d\n", ret); + if (object->func->fini) + object->func->fini(object, false); + return ret; +} + +void * +nvkm_object_dtor(struct 
nvkm_object *object) +{ + struct nvkm_object *child, *ctemp; + void *data = object; + s64 time; + + nvif_debug(object, "destroy children...\n"); + time = ktime_to_us(ktime_get()); + list_for_each_entry_safe(child, ctemp, &object->tree, head) { + nvkm_object_del(&child); + } + + nvif_debug(object, "destroy running...\n"); + nvkm_object_unmap(object); + if (object->func->dtor) + data = object->func->dtor(object); + nvkm_engine_unref(&object->engine); + time = ktime_to_us(ktime_get()) - time; + nvif_debug(object, "destroy completed in %lldus...\n", time); + return data; +} + +void +nvkm_object_del(struct nvkm_object **pobject) +{ + struct nvkm_object *object = *pobject; + if (object && !WARN_ON(!object->func)) { + *pobject = nvkm_object_dtor(object); + nvkm_object_remove(object); + list_del(&object->head); + kfree(*pobject); + *pobject = NULL; + } +} + +void +nvkm_object_ctor(const struct nvkm_object_func *func, + const struct nvkm_oclass *oclass, struct nvkm_object *object) +{ + object->func = func; + object->client = oclass->client; + object->engine = nvkm_engine_ref(oclass->engine); + object->oclass = oclass->base.oclass; + object->handle = oclass->handle; + object->route = oclass->route; + object->token = oclass->token; + object->object = oclass->object; + INIT_LIST_HEAD(&object->head); + INIT_LIST_HEAD(&object->tree); + RB_CLEAR_NODE(&object->node); + WARN_ON(IS_ERR(object->engine)); +} + +int +nvkm_object_new_(const struct nvkm_object_func *func, + const struct nvkm_oclass *oclass, void *data, u32 size, + struct nvkm_object **pobject) +{ + if (size == 0) { + if (!(*pobject = kzalloc(sizeof(**pobject), GFP_KERNEL))) + return -ENOMEM; + nvkm_object_ctor(func, oclass, *pobject); + return 0; + } + return -ENOSYS; +} + +static const struct nvkm_object_func +nvkm_object_func = { +}; + +int +nvkm_object_new(const struct nvkm_oclass *oclass, void *data, u32 size, + struct nvkm_object **pobject) +{ + const struct nvkm_object_func *func = + oclass->base.func ? oclass->base.func : &nvkm_object_func; + return nvkm_object_new_(func, oclass, data, size, pobject); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/oproxy.c b/drivers/gpu/drm/nouveau/nvkm/core/oproxy.c new file mode 100644 index 000000000..3385528da --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/oproxy.c @@ -0,0 +1,227 @@ +/* + * Copyright 2015 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + * Authors: Ben Skeggs <bskeggs@redhat.com> + */ +#include <core/oproxy.h> + +static int +nvkm_oproxy_mthd(struct nvkm_object *object, u32 mthd, void *data, u32 size) +{ + return nvkm_object_mthd(nvkm_oproxy(object)->object, mthd, data, size); +} + +static int +nvkm_oproxy_ntfy(struct nvkm_object *object, u32 mthd, + struct nvkm_event **pevent) +{ + return nvkm_object_ntfy(nvkm_oproxy(object)->object, mthd, pevent); +} + +static int +nvkm_oproxy_map(struct nvkm_object *object, void *argv, u32 argc, + enum nvkm_object_map *type, u64 *addr, u64 *size) +{ + struct nvkm_oproxy *oproxy = nvkm_oproxy(object); + return nvkm_object_map(oproxy->object, argv, argc, type, addr, size); +} + +static int +nvkm_oproxy_unmap(struct nvkm_object *object) +{ + struct nvkm_oproxy *oproxy = nvkm_oproxy(object); + + if (unlikely(!oproxy->object)) + return 0; + + return nvkm_object_unmap(oproxy->object); +} + +static int +nvkm_oproxy_rd08(struct nvkm_object *object, u64 addr, u8 *data) +{ + return nvkm_object_rd08(nvkm_oproxy(object)->object, addr, data); +} + +static int +nvkm_oproxy_rd16(struct nvkm_object *object, u64 addr, u16 *data) +{ + return nvkm_object_rd16(nvkm_oproxy(object)->object, addr, data); +} + +static int +nvkm_oproxy_rd32(struct nvkm_object *object, u64 addr, u32 *data) +{ + return nvkm_object_rd32(nvkm_oproxy(object)->object, addr, data); +} + +static int +nvkm_oproxy_wr08(struct nvkm_object *object, u64 addr, u8 data) +{ + return nvkm_object_wr08(nvkm_oproxy(object)->object, addr, data); +} + +static int +nvkm_oproxy_wr16(struct nvkm_object *object, u64 addr, u16 data) +{ + return nvkm_object_wr16(nvkm_oproxy(object)->object, addr, data); +} + +static int +nvkm_oproxy_wr32(struct nvkm_object *object, u64 addr, u32 data) +{ + return nvkm_object_wr32(nvkm_oproxy(object)->object, addr, data); +} + +static int +nvkm_oproxy_bind(struct nvkm_object *object, struct nvkm_gpuobj *parent, + int align, struct nvkm_gpuobj **pgpuobj) +{ + return nvkm_object_bind(nvkm_oproxy(object)->object, + parent, align, pgpuobj); +} + +static int +nvkm_oproxy_sclass(struct nvkm_object *object, int index, + struct nvkm_oclass *oclass) +{ + struct nvkm_oproxy *oproxy = nvkm_oproxy(object); + oclass->parent = oproxy->object; + if (!oproxy->object->func->sclass) + return -ENODEV; + return oproxy->object->func->sclass(oproxy->object, index, oclass); +} + +static int +nvkm_oproxy_uevent(struct nvkm_object *object, void *argv, u32 argc, + struct nvkm_uevent *uevent) +{ + struct nvkm_oproxy *oproxy = nvkm_oproxy(object); + + if (!oproxy->object->func->uevent) + return -ENOSYS; + + return oproxy->object->func->uevent(oproxy->object, argv, argc, uevent); +} + +static int +nvkm_oproxy_fini(struct nvkm_object *object, bool suspend) +{ + struct nvkm_oproxy *oproxy = nvkm_oproxy(object); + int ret; + + if (oproxy->func->fini[0]) { + ret = oproxy->func->fini[0](oproxy, suspend); + if (ret && suspend) + return ret; + } + + if (oproxy->object->func->fini) { + ret = oproxy->object->func->fini(oproxy->object, suspend); + if (ret && suspend) + return ret; + } + + if (oproxy->func->fini[1]) { + ret = oproxy->func->fini[1](oproxy, suspend); + if (ret && suspend) + return ret; + } + + return 0; +} + +static int +nvkm_oproxy_init(struct nvkm_object *object) +{ + struct nvkm_oproxy *oproxy = nvkm_oproxy(object); + int ret; + + if (oproxy->func->init[0]) { + ret = oproxy->func->init[0](oproxy); + if (ret) + return ret; + } + + if (oproxy->object->func->init) { + ret = oproxy->object->func->init(oproxy->object); + if (ret) + return ret; 
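/* As with fini above, the func->init[0]/init[1] (and dtor[0]/dtor[1]) hook
 * pairs bracket the wrapped object's own operation, letting an oproxy
 * implementation run code both before and after it.  A hypothetical
 * implementation might look like:
 *
 *	static const struct nvkm_oproxy_func foo_oproxy = {
 *		.init = { foo_init_pre, foo_init_post },
 *		.fini = { foo_fini_pre, foo_fini_post },
 *	};
 */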
+ } + + if (oproxy->func->init[1]) { + ret = oproxy->func->init[1](oproxy); + if (ret) + return ret; + } + + return 0; +} + +static void * +nvkm_oproxy_dtor(struct nvkm_object *object) +{ + struct nvkm_oproxy *oproxy = nvkm_oproxy(object); + if (oproxy->func->dtor[0]) + oproxy->func->dtor[0](oproxy); + nvkm_object_del(&oproxy->object); + if (oproxy->func->dtor[1]) + oproxy->func->dtor[1](oproxy); + return oproxy; +} + +static const struct nvkm_object_func +nvkm_oproxy_func = { + .dtor = nvkm_oproxy_dtor, + .init = nvkm_oproxy_init, + .fini = nvkm_oproxy_fini, + .mthd = nvkm_oproxy_mthd, + .ntfy = nvkm_oproxy_ntfy, + .map = nvkm_oproxy_map, + .unmap = nvkm_oproxy_unmap, + .rd08 = nvkm_oproxy_rd08, + .rd16 = nvkm_oproxy_rd16, + .rd32 = nvkm_oproxy_rd32, + .wr08 = nvkm_oproxy_wr08, + .wr16 = nvkm_oproxy_wr16, + .wr32 = nvkm_oproxy_wr32, + .bind = nvkm_oproxy_bind, + .sclass = nvkm_oproxy_sclass, + .uevent = nvkm_oproxy_uevent, +}; + +void +nvkm_oproxy_ctor(const struct nvkm_oproxy_func *func, + const struct nvkm_oclass *oclass, struct nvkm_oproxy *oproxy) +{ + nvkm_object_ctor(&nvkm_oproxy_func, oclass, &oproxy->base); + oproxy->func = func; +} + +int +nvkm_oproxy_new_(const struct nvkm_oproxy_func *func, + const struct nvkm_oclass *oclass, struct nvkm_oproxy **poproxy) +{ + if (!(*poproxy = kzalloc(sizeof(**poproxy), GFP_KERNEL))) + return -ENOMEM; + nvkm_oproxy_ctor(func, oclass, *poproxy); + return 0; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/option.c b/drivers/gpu/drm/nouveau/nvkm/core/option.c new file mode 100644 index 000000000..3e62cf8cd --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/option.c @@ -0,0 +1,139 @@ +/* + * Copyright 2012 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: Ben Skeggs + */ +#include <core/option.h> +#include <core/debug.h> + +const char * +nvkm_stropt(const char *optstr, const char *opt, int *arglen) +{ + while (optstr && *optstr != '\0') { + int len = strcspn(optstr, ",="); + switch (optstr[len]) { + case '=': + if (!strncasecmpz(optstr, opt, len)) { + optstr += len + 1; + *arglen = strcspn(optstr, ",="); + return *arglen ? 
optstr : NULL; + } + optstr++; + break; + case ',': + optstr++; + break; + default: + break; + } + optstr += len; + } + + return NULL; +} + +bool +nvkm_boolopt(const char *optstr, const char *opt, bool value) +{ + int arglen; + + optstr = nvkm_stropt(optstr, opt, &arglen); + if (optstr) { + if (!strncasecmpz(optstr, "0", arglen) || + !strncasecmpz(optstr, "no", arglen) || + !strncasecmpz(optstr, "off", arglen) || + !strncasecmpz(optstr, "false", arglen)) + value = false; + else + if (!strncasecmpz(optstr, "1", arglen) || + !strncasecmpz(optstr, "yes", arglen) || + !strncasecmpz(optstr, "on", arglen) || + !strncasecmpz(optstr, "true", arglen)) + value = true; + } + + return value; +} + +long +nvkm_longopt(const char *optstr, const char *opt, long value) +{ + long result = value; + int arglen; + char *s; + + optstr = nvkm_stropt(optstr, opt, &arglen); + if (optstr && (s = kstrndup(optstr, arglen, GFP_KERNEL))) { + int ret = kstrtol(s, 0, &value); + if (ret == 0) + result = value; + kfree(s); + } + + return result; +} + +int +nvkm_dbgopt(const char *optstr, const char *sub) +{ + int mode = 1, level = CONFIG_NOUVEAU_DEBUG_DEFAULT; + + while (optstr) { + int len = strcspn(optstr, ",="); + switch (optstr[len]) { + case '=': + if (strncasecmpz(optstr, sub, len)) + mode = 0; + optstr++; + break; + default: + if (mode) { + if (!strncasecmpz(optstr, "fatal", len)) + level = NV_DBG_FATAL; + else if (!strncasecmpz(optstr, "error", len)) + level = NV_DBG_ERROR; + else if (!strncasecmpz(optstr, "warn", len)) + level = NV_DBG_WARN; + else if (!strncasecmpz(optstr, "info", len)) + level = NV_DBG_INFO; + else if (!strncasecmpz(optstr, "debug", len)) + level = NV_DBG_DEBUG; + else if (!strncasecmpz(optstr, "trace", len)) + level = NV_DBG_TRACE; + else if (!strncasecmpz(optstr, "paranoia", len)) + level = NV_DBG_PARANOIA; + else if (!strncasecmpz(optstr, "spam", len)) + level = NV_DBG_SPAM; + } + + if (optstr[len] != '\0') { + optstr++; + mode = 1; + break; + } + + return level; + } + optstr += len; + } + + return level; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/ramht.c b/drivers/gpu/drm/nouveau/nvkm/core/ramht.c new file mode 100644 index 000000000..8162e3d23 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/ramht.c @@ -0,0 +1,162 @@ +/* + * Copyright 2012 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#include <core/ramht.h> +#include <core/engine.h> +#include <core/object.h> + +static u32 +nvkm_ramht_hash(struct nvkm_ramht *ramht, int chid, u32 handle) +{ + u32 hash = 0; + + while (handle) { + hash ^= (handle & ((1 << ramht->bits) - 1)); + handle >>= ramht->bits; + } + + hash ^= chid << (ramht->bits - 4); + return hash; +} + +struct nvkm_gpuobj * +nvkm_ramht_search(struct nvkm_ramht *ramht, int chid, u32 handle) +{ + u32 co, ho; + + co = ho = nvkm_ramht_hash(ramht, chid, handle); + do { + if (ramht->data[co].chid == chid) { + if (ramht->data[co].handle == handle) + return ramht->data[co].inst; + } + + if (++co >= ramht->size) + co = 0; + } while (co != ho); + + return NULL; +} + +static int +nvkm_ramht_update(struct nvkm_ramht *ramht, int co, struct nvkm_object *object, + int chid, int addr, u32 handle, u32 context) +{ + struct nvkm_ramht_data *data = &ramht->data[co]; + u64 inst = 0x00000040; /* just non-zero for <=g8x fifo ramht */ + int ret; + + nvkm_gpuobj_del(&data->inst); + data->chid = chid; + data->handle = handle; + + if (object) { + ret = nvkm_object_bind(object, ramht->parent, 16, &data->inst); + if (ret) { + if (ret != -ENODEV) { + data->chid = -1; + return ret; + } + data->inst = NULL; + } + + if (data->inst) { + if (ramht->device->card_type >= NV_50) + inst = data->inst->node->offset; + else + inst = data->inst->addr; + } + + if (addr < 0) context |= inst << -addr; + else context |= inst >> addr; + } + + nvkm_kmap(ramht->gpuobj); + nvkm_wo32(ramht->gpuobj, (co << 3) + 0, handle); + nvkm_wo32(ramht->gpuobj, (co << 3) + 4, context); + nvkm_done(ramht->gpuobj); + return co + 1; +} + +void +nvkm_ramht_remove(struct nvkm_ramht *ramht, int cookie) +{ + if (--cookie >= 0) + nvkm_ramht_update(ramht, cookie, NULL, -1, 0, 0, 0); +} + +int +nvkm_ramht_insert(struct nvkm_ramht *ramht, struct nvkm_object *object, + int chid, int addr, u32 handle, u32 context) +{ + u32 co, ho; + + if (nvkm_ramht_search(ramht, chid, handle)) + return -EEXIST; + + co = ho = nvkm_ramht_hash(ramht, chid, handle); + do { + if (ramht->data[co].chid < 0) { + return nvkm_ramht_update(ramht, co, object, chid, + addr, handle, context); + } + + if (++co >= ramht->size) + co = 0; + } while (co != ho); + + return -ENOSPC; +} + +void +nvkm_ramht_del(struct nvkm_ramht **pramht) +{ + struct nvkm_ramht *ramht = *pramht; + if (ramht) { + nvkm_gpuobj_del(&ramht->gpuobj); + vfree(*pramht); + *pramht = NULL; + } +} + +int +nvkm_ramht_new(struct nvkm_device *device, u32 size, u32 align, + struct nvkm_gpuobj *parent, struct nvkm_ramht **pramht) +{ + struct nvkm_ramht *ramht; + int ret, i; + + if (!(ramht = *pramht = vzalloc(struct_size(ramht, data, (size >> 3))))) + return -ENOMEM; + + ramht->device = device; + ramht->parent = parent; + ramht->size = size >> 3; + ramht->bits = order_base_2(ramht->size); + for (i = 0; i < ramht->size; i++) + ramht->data[i].chid = -1; + + ret = nvkm_gpuobj_new(ramht->device, size, align, true, + ramht->parent, &ramht->gpuobj); + if (ret) + nvkm_ramht_del(pramht); + return ret; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/subdev.c b/drivers/gpu/drm/nouveau/nvkm/core/subdev.c new file mode 100644 index 000000000..6c20e827a --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/subdev.c @@ -0,0 +1,275 @@ +/* + * Copyright 2012 Red Hat Inc. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: Ben Skeggs + */ +#include <core/subdev.h> +#include <core/device.h> +#include <core/option.h> +#include <subdev/mc.h> + +const char * +nvkm_subdev_type[NVKM_SUBDEV_NR] = { +#define NVKM_LAYOUT_ONCE(type,data,ptr,...) [type] = #ptr, +#define NVKM_LAYOUT_INST(A...) NVKM_LAYOUT_ONCE(A) +#include <core/layout.h> +#undef NVKM_LAYOUT_ONCE +#undef NVKM_LAYOUT_INST +}; + +void +nvkm_subdev_intr(struct nvkm_subdev *subdev) +{ + if (subdev->func->intr) + subdev->func->intr(subdev); +} + +int +nvkm_subdev_info(struct nvkm_subdev *subdev, u64 mthd, u64 *data) +{ + if (subdev->func->info) + return subdev->func->info(subdev, mthd, data); + return -ENOSYS; +} + +int +nvkm_subdev_fini(struct nvkm_subdev *subdev, bool suspend) +{ + struct nvkm_device *device = subdev->device; + const char *action = suspend ? "suspend" : subdev->use.enabled ? 
"fini" : "reset"; + s64 time; + + nvkm_trace(subdev, "%s running...\n", action); + time = ktime_to_us(ktime_get()); + + if (subdev->func->fini) { + int ret = subdev->func->fini(subdev, suspend); + if (ret) { + nvkm_error(subdev, "%s failed, %d\n", action, ret); + if (suspend) + return ret; + } + } + subdev->use.enabled = false; + + nvkm_mc_reset(device, subdev->type, subdev->inst); + + time = ktime_to_us(ktime_get()) - time; + nvkm_trace(subdev, "%s completed in %lldus\n", action, time); + return 0; +} + +int +nvkm_subdev_preinit(struct nvkm_subdev *subdev) +{ + s64 time; + + nvkm_trace(subdev, "preinit running...\n"); + time = ktime_to_us(ktime_get()); + + if (subdev->func->preinit) { + int ret = subdev->func->preinit(subdev); + if (ret) { + nvkm_error(subdev, "preinit failed, %d\n", ret); + return ret; + } + } + + time = ktime_to_us(ktime_get()) - time; + nvkm_trace(subdev, "preinit completed in %lldus\n", time); + return 0; +} + +static int +nvkm_subdev_oneinit_(struct nvkm_subdev *subdev) +{ + s64 time; + int ret; + + if (!subdev->func->oneinit || subdev->oneinit) + return 0; + + nvkm_trace(subdev, "one-time init running...\n"); + time = ktime_to_us(ktime_get()); + ret = subdev->func->oneinit(subdev); + if (ret) { + nvkm_error(subdev, "one-time init failed, %d\n", ret); + return ret; + } + + subdev->oneinit = true; + time = ktime_to_us(ktime_get()) - time; + nvkm_trace(subdev, "one-time init completed in %lldus\n", time); + return 0; +} + +static int +nvkm_subdev_init_(struct nvkm_subdev *subdev) +{ + s64 time; + int ret; + + if (subdev->use.enabled) { + nvkm_trace(subdev, "init skipped, already running\n"); + return 0; + } + + nvkm_trace(subdev, "init running...\n"); + time = ktime_to_us(ktime_get()); + + ret = nvkm_subdev_oneinit_(subdev); + if (ret) + return ret; + + subdev->use.enabled = true; + + if (subdev->func->init) { + ret = subdev->func->init(subdev); + if (ret) { + nvkm_error(subdev, "init failed, %d\n", ret); + return ret; + } + } + + time = ktime_to_us(ktime_get()) - time; + nvkm_trace(subdev, "init completed in %lldus\n", time); + return 0; +} + +int +nvkm_subdev_init(struct nvkm_subdev *subdev) +{ + int ret; + + mutex_lock(&subdev->use.mutex); + if (refcount_read(&subdev->use.refcount) == 0) { + nvkm_trace(subdev, "init skipped, no users\n"); + mutex_unlock(&subdev->use.mutex); + return 0; + } + + ret = nvkm_subdev_init_(subdev); + mutex_unlock(&subdev->use.mutex); + return ret; +} + +int +nvkm_subdev_oneinit(struct nvkm_subdev *subdev) +{ + int ret; + + mutex_lock(&subdev->use.mutex); + ret = nvkm_subdev_oneinit_(subdev); + mutex_unlock(&subdev->use.mutex); + return ret; +} + +void +nvkm_subdev_unref(struct nvkm_subdev *subdev) +{ + if (refcount_dec_and_mutex_lock(&subdev->use.refcount, &subdev->use.mutex)) { + nvkm_subdev_fini(subdev, false); + mutex_unlock(&subdev->use.mutex); + } +} + +int +nvkm_subdev_ref(struct nvkm_subdev *subdev) +{ + int ret; + + if (subdev && !refcount_inc_not_zero(&subdev->use.refcount)) { + mutex_lock(&subdev->use.mutex); + if (!refcount_inc_not_zero(&subdev->use.refcount)) { + if ((ret = nvkm_subdev_init_(subdev))) { + mutex_unlock(&subdev->use.mutex); + return ret; + } + + refcount_set(&subdev->use.refcount, 1); + } + mutex_unlock(&subdev->use.mutex); + } + + return 0; +} + +void +nvkm_subdev_del(struct nvkm_subdev **psubdev) +{ + struct nvkm_subdev *subdev = *psubdev; + s64 time; + + if (subdev && !WARN_ON(!subdev->func)) { + nvkm_trace(subdev, "destroy running...\n"); + time = ktime_to_us(ktime_get()); + list_del(&subdev->head); + if 
(subdev->func->dtor) + *psubdev = subdev->func->dtor(subdev); + mutex_destroy(&subdev->use.mutex); + time = ktime_to_us(ktime_get()) - time; + nvkm_trace(subdev, "destroy completed in %lldus\n", time); + kfree(*psubdev); + *psubdev = NULL; + } +} + +void +nvkm_subdev_disable(struct nvkm_device *device, enum nvkm_subdev_type type, int inst) +{ + struct nvkm_subdev *subdev; + list_for_each_entry(subdev, &device->subdev, head) { + if (subdev->type == type && subdev->inst == inst) { + *subdev->pself = NULL; + nvkm_subdev_del(&subdev); + break; + } + } +} + +void +__nvkm_subdev_ctor(const struct nvkm_subdev_func *func, struct nvkm_device *device, + enum nvkm_subdev_type type, int inst, struct nvkm_subdev *subdev) +{ + subdev->func = func; + subdev->device = device; + subdev->type = type; + subdev->inst = inst < 0 ? 0 : inst; + + if (inst >= 0) + snprintf(subdev->name, sizeof(subdev->name), "%s%d", nvkm_subdev_type[type], inst); + else + strscpy(subdev->name, nvkm_subdev_type[type], sizeof(subdev->name)); + subdev->debug = nvkm_dbgopt(device->dbgopt, subdev->name); + + refcount_set(&subdev->use.refcount, 1); + list_add_tail(&subdev->head, &device->subdev); +} + +int +nvkm_subdev_new_(const struct nvkm_subdev_func *func, struct nvkm_device *device, + enum nvkm_subdev_type type, int inst, struct nvkm_subdev **psubdev) +{ + if (!(*psubdev = kzalloc(sizeof(**psubdev), GFP_KERNEL))) + return -ENOMEM; + nvkm_subdev_ctor(func, device, type, inst, *psubdev); + return 0; +} diff --git a/drivers/gpu/drm/nouveau/nvkm/core/uevent.c b/drivers/gpu/drm/nouveau/nvkm/core/uevent.c new file mode 100644 index 000000000..ba9d9edae --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/core/uevent.c @@ -0,0 +1,157 @@ +/* + * Copyright 2021 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#define nvkm_uevent(p) container_of((p), struct nvkm_uevent, object) +#include <core/event.h> +#include <core/client.h> + +#include <nvif/if000e.h> + +struct nvkm_uevent { + struct nvkm_object object; + struct nvkm_object *parent; + nvkm_uevent_func func; + bool wait; + + struct nvkm_event_ntfy ntfy; + atomic_t allowed; +}; + +static int +nvkm_uevent_mthd_block(struct nvkm_uevent *uevent, union nvif_event_block_args *args, u32 argc) +{ + if (argc != sizeof(args->vn)) + return -ENOSYS; + + nvkm_event_ntfy_block(&uevent->ntfy); + atomic_set(&uevent->allowed, 0); + return 0; +} + +static int +nvkm_uevent_mthd_allow(struct nvkm_uevent *uevent, union nvif_event_allow_args *args, u32 argc) +{ + if (argc != sizeof(args->vn)) + return -ENOSYS; + + nvkm_event_ntfy_allow(&uevent->ntfy); + atomic_set(&uevent->allowed, 1); + return 0; +} + +static int +nvkm_uevent_mthd(struct nvkm_object *object, u32 mthd, void *argv, u32 argc) +{ + struct nvkm_uevent *uevent = nvkm_uevent(object); + + switch (mthd) { + case NVIF_EVENT_V0_ALLOW: return nvkm_uevent_mthd_allow(uevent, argv, argc); + case NVIF_EVENT_V0_BLOCK: return nvkm_uevent_mthd_block(uevent, argv, argc); + default: + break; + } + + return -EINVAL; +} + +static int +nvkm_uevent_fini(struct nvkm_object *object, bool suspend) +{ + struct nvkm_uevent *uevent = nvkm_uevent(object); + + nvkm_event_ntfy_block(&uevent->ntfy); + return 0; +} + +static int +nvkm_uevent_init(struct nvkm_object *object) +{ + struct nvkm_uevent *uevent = nvkm_uevent(object); + + if (atomic_read(&uevent->allowed)) + nvkm_event_ntfy_allow(&uevent->ntfy); + + return 0; +} + +static void * +nvkm_uevent_dtor(struct nvkm_object *object) +{ + struct nvkm_uevent *uevent = nvkm_uevent(object); + + nvkm_event_ntfy_del(&uevent->ntfy); + return uevent; +} + +static const struct nvkm_object_func +nvkm_uevent = { + .dtor = nvkm_uevent_dtor, + .init = nvkm_uevent_init, + .fini = nvkm_uevent_fini, + .mthd = nvkm_uevent_mthd, +}; + +static int +nvkm_uevent_ntfy(struct nvkm_event_ntfy *ntfy, u32 bits) +{ + struct nvkm_uevent *uevent = container_of(ntfy, typeof(*uevent), ntfy); + struct nvkm_client *client = uevent->object.client; + + if (uevent->func) + return uevent->func(uevent->parent, uevent->object.token, bits); + + return client->event(uevent->object.token, NULL, 0); +} + +int +nvkm_uevent_add(struct nvkm_uevent *uevent, struct nvkm_event *event, int id, u32 bits, + nvkm_uevent_func func) +{ + if (WARN_ON(uevent->func)) + return -EBUSY; + + nvkm_event_ntfy_add(event, id, bits, uevent->wait, nvkm_uevent_ntfy, &uevent->ntfy); + uevent->func = func; + return 0; +} + +int +nvkm_uevent_new(const struct nvkm_oclass *oclass, void *argv, u32 argc, + struct nvkm_object **pobject) +{ + struct nvkm_object *parent = oclass->parent; + struct nvkm_uevent *uevent; + union nvif_event_args *args = argv; + + if (argc < sizeof(args->v0) || args->v0.version != 0) + return -ENOSYS; + + if (!(uevent = kzalloc(sizeof(*uevent), GFP_KERNEL))) + return -ENOMEM; + *pobject = &uevent->object; + + nvkm_object_ctor(&nvkm_uevent, oclass, &uevent->object); + uevent->parent = parent; + uevent->func = NULL; + uevent->wait = args->v0.wait; + uevent->ntfy.event = NULL; + return parent->func->uevent(parent, &args->v0.data, argc - sizeof(args->v0), uevent); +}
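The uevent object above is what nvkm_ioctl_sclass_() earlier exposes as
NVIF_CLASS_EVENT: any object whose func->uevent hook accepts the NULL probe
gets the event class listed, and constructing that class routes here through
nvkm_uevent_new().  A parent-side hook would be expected to look roughly like
the following sketch (struct foo, foo->event and FOO_EVENT_BITS are
hypothetical names for illustration):

	static int
	foo_uevent(struct nvkm_object *object, void *argv, u32 argc,
		   struct nvkm_uevent *uevent)
	{
		struct foo *foo = container_of(object, struct foo, object);

		if (!uevent)	/* NULL probe from nvkm_ioctl_sclass_() */
			return 0;

		/* bind the notify handle to this object's event source */
		return nvkm_uevent_add(uevent, &foo->event, 0,
				       FOO_EVENT_BITS, NULL);
	}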