author      2023-02-21 18:24:12 -0800
committer   2023-02-21 18:24:12 -0800
commit      5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree        cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /arch/arm/mach-omap2/clockdomain.c
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs.
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control local port range
on a socket-by-socket basis.
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
BPF:
- Add an rbtree data structure following the "next-gen data structure"
precedent set by recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in collect
metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscall as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs.
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races of the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of Physical Layer Collision Avoidance (PLCA)
Reconciliation Sublayer (RS) (802.3cg-2019). Modern version of
shared medium Ethernet.
- Support for MAC Merge layer (IEEE 802.3-2018 clause 99). Allowing
preemption of low priority frames by high priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using nl80211 interface instead.
- Improve the CAN bit timing configuration. Use extack to return
error messages directly to user space, update the SJW handling,
including the definition of a new default value that will benefit
CAN-FD controllers, by increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper PHYs
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'arch/arm/mach-omap2/clockdomain.c')
-rw-r--r--   arch/arm/mach-omap2/clockdomain.c   1301
1 file changed, 1301 insertions, 0 deletions
diff --git a/arch/arm/mach-omap2/clockdomain.c b/arch/arm/mach-omap2/clockdomain.c new file mode 100644 index 000000000..d145e7ac7 --- /dev/null +++ b/arch/arm/mach-omap2/clockdomain.c @@ -0,0 +1,1301 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * OMAP2/3/4 clockdomain framework functions + * + * Copyright (C) 2008-2011 Texas Instruments, Inc. + * Copyright (C) 2008-2011 Nokia Corporation + * + * Written by Paul Walmsley and Jouni Högander + * Added OMAP4 specific support by Abhijit Pagare <abhijitpagare@ti.com> + */ +#undef DEBUG + +#include <linux/kernel.h> +#include <linux/device.h> +#include <linux/list.h> +#include <linux/errno.h> +#include <linux/string.h> +#include <linux/delay.h> +#include <linux/clk.h> +#include <linux/limits.h> +#include <linux/err.h> +#include <linux/clk-provider.h> +#include <linux/cpu_pm.h> + +#include <linux/io.h> + +#include <linux/bitops.h> + +#include "soc.h" +#include "clock.h" +#include "clockdomain.h" +#include "pm.h" + +/* clkdm_list contains all registered struct clockdomains */ +static LIST_HEAD(clkdm_list); + +/* array of clockdomain deps to be added/removed when clkdm in hwsup mode */ +static struct clkdm_autodep *autodeps; + +static struct clkdm_ops *arch_clkdm; +void clkdm_save_context(void); +void clkdm_restore_context(void); + +/* Private functions */ + +static struct clockdomain *_clkdm_lookup(const char *name) +{ + struct clockdomain *clkdm, *temp_clkdm; + + if (!name) + return NULL; + + clkdm = NULL; + + list_for_each_entry(temp_clkdm, &clkdm_list, node) { + if (!strcmp(name, temp_clkdm->name)) { + clkdm = temp_clkdm; + break; + } + } + + return clkdm; +} + +/** + * _clkdm_register - register a clockdomain + * @clkdm: struct clockdomain * to register + * + * Adds a clockdomain to the internal clockdomain list. + * Returns -EINVAL if given a null pointer, -EEXIST if a clockdomain is + * already registered by the provided name, or 0 upon success. + */ +static int _clkdm_register(struct clockdomain *clkdm) +{ + struct powerdomain *pwrdm; + + if (!clkdm || !clkdm->name) + return -EINVAL; + + pwrdm = pwrdm_lookup(clkdm->pwrdm.name); + if (!pwrdm) { + pr_err("clockdomain: %s: powerdomain %s does not exist\n", + clkdm->name, clkdm->pwrdm.name); + return -EINVAL; + } + clkdm->pwrdm.ptr = pwrdm; + + /* Verify that the clockdomain is not already registered */ + if (_clkdm_lookup(clkdm->name)) + return -EEXIST; + + list_add(&clkdm->node, &clkdm_list); + + pwrdm_add_clkdm(pwrdm, clkdm); + + pr_debug("clockdomain: registered %s\n", clkdm->name); + + return 0; +} + +/* _clkdm_deps_lookup - look up the specified clockdomain in a clkdm list */ +static struct clkdm_dep *_clkdm_deps_lookup(struct clockdomain *clkdm, + struct clkdm_dep *deps) +{ + struct clkdm_dep *cd; + + if (!clkdm || !deps) + return ERR_PTR(-EINVAL); + + for (cd = deps; cd->clkdm_name; cd++) { + if (!cd->clkdm && cd->clkdm_name) + cd->clkdm = _clkdm_lookup(cd->clkdm_name); + + if (cd->clkdm == clkdm) + break; + } + + if (!cd->clkdm_name) + return ERR_PTR(-ENOENT); + + return cd; +} + +/** + * _autodep_lookup - resolve autodep clkdm names to clkdm pointers; store + * @autodep: struct clkdm_autodep * to resolve + * + * Resolve autodep clockdomain names to clockdomain pointers via + * clkdm_lookup() and store the pointers in the autodep structure. 
An + * "autodep" is a clockdomain sleep/wakeup dependency that is + * automatically added and removed whenever clocks in the associated + * clockdomain are enabled or disabled (respectively) when the + * clockdomain is in hardware-supervised mode. Meant to be called + * once at clockdomain layer initialization, since these should remain + * fixed for a particular architecture. No return value. + * + * XXX autodeps are deprecated and should be removed at the earliest + * opportunity + */ +static void _autodep_lookup(struct clkdm_autodep *autodep) +{ + struct clockdomain *clkdm; + + if (!autodep) + return; + + clkdm = clkdm_lookup(autodep->clkdm.name); + if (!clkdm) { + pr_err("clockdomain: autodeps: clockdomain %s does not exist\n", + autodep->clkdm.name); + clkdm = ERR_PTR(-ENOENT); + } + autodep->clkdm.ptr = clkdm; +} + +/** + * _resolve_clkdm_deps() - resolve clkdm_names in @clkdm_deps to clkdms + * @clkdm: clockdomain that we are resolving dependencies for + * @clkdm_deps: ptr to array of struct clkdm_deps to resolve + * + * Iterates through @clkdm_deps, looking up the struct clockdomain named by + * clkdm_name and storing the clockdomain pointer in the struct clkdm_dep. + * No return value. + */ +static void _resolve_clkdm_deps(struct clockdomain *clkdm, + struct clkdm_dep *clkdm_deps) +{ + struct clkdm_dep *cd; + + for (cd = clkdm_deps; cd && cd->clkdm_name; cd++) { + if (cd->clkdm) + continue; + cd->clkdm = _clkdm_lookup(cd->clkdm_name); + + WARN(!cd->clkdm, "clockdomain: %s: could not find clkdm %s while resolving dependencies - should never happen", + clkdm->name, cd->clkdm_name); + } +} + +/** + * _clkdm_add_wkdep - add a wakeup dependency from clkdm2 to clkdm1 (lockless) + * @clkdm1: wake this struct clockdomain * up (dependent) + * @clkdm2: when this struct clockdomain * wakes up (source) + * + * When the clockdomain represented by @clkdm2 wakes up, wake up + * @clkdm1. Implemented in hardware on the OMAP, this feature is + * designed to reduce wakeup latency of the dependent clockdomain @clkdm1. + * Returns -EINVAL if presented with invalid clockdomain pointers, + * -ENOENT if @clkdm2 cannot wake up clkdm1 in hardware, or 0 upon + * success. + */ +static int _clkdm_add_wkdep(struct clockdomain *clkdm1, + struct clockdomain *clkdm2) +{ + struct clkdm_dep *cd; + int ret = 0; + + if (!clkdm1 || !clkdm2) + return -EINVAL; + + cd = _clkdm_deps_lookup(clkdm2, clkdm1->wkdep_srcs); + if (IS_ERR(cd)) + ret = PTR_ERR(cd); + + if (!arch_clkdm || !arch_clkdm->clkdm_add_wkdep) + ret = -EINVAL; + + if (ret) { + pr_debug("clockdomain: hardware cannot set/clear wake up of %s when %s wakes up\n", + clkdm1->name, clkdm2->name); + return ret; + } + + cd->wkdep_usecount++; + if (cd->wkdep_usecount == 1) { + pr_debug("clockdomain: hardware will wake up %s when %s wakes up\n", + clkdm1->name, clkdm2->name); + + ret = arch_clkdm->clkdm_add_wkdep(clkdm1, clkdm2); + } + + return ret; +} + +/** + * _clkdm_del_wkdep - remove a wakeup dep from clkdm2 to clkdm1 (lockless) + * @clkdm1: wake this struct clockdomain * up (dependent) + * @clkdm2: when this struct clockdomain * wakes up (source) + * + * Remove a wakeup dependency causing @clkdm1 to wake up when @clkdm2 + * wakes up. Returns -EINVAL if presented with invalid clockdomain + * pointers, -ENOENT if @clkdm2 cannot wake up clkdm1 in hardware, or + * 0 upon success. 
+ */ +static int _clkdm_del_wkdep(struct clockdomain *clkdm1, + struct clockdomain *clkdm2) +{ + struct clkdm_dep *cd; + int ret = 0; + + if (!clkdm1 || !clkdm2) + return -EINVAL; + + cd = _clkdm_deps_lookup(clkdm2, clkdm1->wkdep_srcs); + if (IS_ERR(cd)) + ret = PTR_ERR(cd); + + if (!arch_clkdm || !arch_clkdm->clkdm_del_wkdep) + ret = -EINVAL; + + if (ret) { + pr_debug("clockdomain: hardware cannot set/clear wake up of %s when %s wakes up\n", + clkdm1->name, clkdm2->name); + return ret; + } + + cd->wkdep_usecount--; + if (cd->wkdep_usecount == 0) { + pr_debug("clockdomain: hardware will no longer wake up %s after %s wakes up\n", + clkdm1->name, clkdm2->name); + + ret = arch_clkdm->clkdm_del_wkdep(clkdm1, clkdm2); + } + + return ret; +} + +/** + * _clkdm_add_sleepdep - add a sleep dependency from clkdm2 to clkdm1 (lockless) + * @clkdm1: prevent this struct clockdomain * from sleeping (dependent) + * @clkdm2: when this struct clockdomain * is active (source) + * + * Prevent @clkdm1 from automatically going inactive (and then to + * retention or off) if @clkdm2 is active. Returns -EINVAL if + * presented with invalid clockdomain pointers or called on a machine + * that does not support software-configurable hardware sleep + * dependencies, -ENOENT if the specified dependency cannot be set in + * hardware, or 0 upon success. + */ +static int _clkdm_add_sleepdep(struct clockdomain *clkdm1, + struct clockdomain *clkdm2) +{ + struct clkdm_dep *cd; + int ret = 0; + + if (!clkdm1 || !clkdm2) + return -EINVAL; + + cd = _clkdm_deps_lookup(clkdm2, clkdm1->sleepdep_srcs); + if (IS_ERR(cd)) + ret = PTR_ERR(cd); + + if (!arch_clkdm || !arch_clkdm->clkdm_add_sleepdep) + ret = -EINVAL; + + if (ret) { + pr_debug("clockdomain: hardware cannot set/clear sleep dependency affecting %s from %s\n", + clkdm1->name, clkdm2->name); + return ret; + } + + cd->sleepdep_usecount++; + if (cd->sleepdep_usecount == 1) { + pr_debug("clockdomain: will prevent %s from sleeping if %s is active\n", + clkdm1->name, clkdm2->name); + + ret = arch_clkdm->clkdm_add_sleepdep(clkdm1, clkdm2); + } + + return ret; +} + +/** + * _clkdm_del_sleepdep - remove a sleep dep from clkdm2 to clkdm1 (lockless) + * @clkdm1: prevent this struct clockdomain * from sleeping (dependent) + * @clkdm2: when this struct clockdomain * is active (source) + * + * Allow @clkdm1 to automatically go inactive (and then to retention or + * off), independent of the activity state of @clkdm2. Returns -EINVAL + * if presented with invalid clockdomain pointers or called on a machine + * that does not support software-configurable hardware sleep dependencies, + * -ENOENT if the specified dependency cannot be cleared in hardware, or + * 0 upon success. 
+ */ +static int _clkdm_del_sleepdep(struct clockdomain *clkdm1, + struct clockdomain *clkdm2) +{ + struct clkdm_dep *cd; + int ret = 0; + + if (!clkdm1 || !clkdm2) + return -EINVAL; + + cd = _clkdm_deps_lookup(clkdm2, clkdm1->sleepdep_srcs); + if (IS_ERR(cd)) + ret = PTR_ERR(cd); + + if (!arch_clkdm || !arch_clkdm->clkdm_del_sleepdep) + ret = -EINVAL; + + if (ret) { + pr_debug("clockdomain: hardware cannot set/clear sleep dependency affecting %s from %s\n", + clkdm1->name, clkdm2->name); + return ret; + } + + cd->sleepdep_usecount--; + if (cd->sleepdep_usecount == 0) { + pr_debug("clockdomain: will no longer prevent %s from sleeping if %s is active\n", + clkdm1->name, clkdm2->name); + + ret = arch_clkdm->clkdm_del_sleepdep(clkdm1, clkdm2); + } + + return ret; +} + +/* Public functions */ + +/** + * clkdm_register_platform_funcs - register clockdomain implementation fns + * @co: func pointers for arch specific implementations + * + * Register the list of function pointers used to implement the + * clockdomain functions on different OMAP SoCs. Should be called + * before any other clkdm_register*() function. Returns -EINVAL if + * @co is null, -EEXIST if platform functions have already been + * registered, or 0 upon success. + */ +int clkdm_register_platform_funcs(struct clkdm_ops *co) +{ + if (!co) + return -EINVAL; + + if (arch_clkdm) + return -EEXIST; + + arch_clkdm = co; + + return 0; +}; + +/** + * clkdm_register_clkdms - register SoC clockdomains + * @cs: pointer to an array of struct clockdomain to register + * + * Register the clockdomains available on a particular OMAP SoC. Must + * be called after clkdm_register_platform_funcs(). May be called + * multiple times. Returns -EACCES if called before + * clkdm_register_platform_funcs(); -EINVAL if the argument @cs is + * null; or 0 upon success. + */ +int clkdm_register_clkdms(struct clockdomain **cs) +{ + struct clockdomain **c = NULL; + + if (!arch_clkdm) + return -EACCES; + + if (!cs) + return -EINVAL; + + for (c = cs; *c; c++) + _clkdm_register(*c); + + return 0; +} + +/** + * clkdm_register_autodeps - register autodeps (if required) + * @ia: pointer to a static array of struct clkdm_autodep to register + * + * Register clockdomain "automatic dependencies." These are + * clockdomain wakeup and sleep dependencies that are automatically + * added whenever the first clock inside a clockdomain is enabled, and + * removed whenever the last clock inside a clockdomain is disabled. + * These are currently only used on OMAP3 devices, and are deprecated, + * since they waste energy. However, until the OMAP2/3 IP block + * enable/disable sequence can be converted to match the OMAP4 + * sequence, they are needed. + * + * Must be called only after all of the SoC clockdomains are + * registered, since the function will resolve autodep clockdomain + * names into clockdomain pointers. + * + * The struct clkdm_autodep @ia array must be static, as this function + * does not copy the array elements. + * + * Returns -EACCES if called before any clockdomains have been + * registered, -EINVAL if called with a null @ia argument, -EEXIST if + * autodeps have already been registered, or 0 upon success. 
+ */ +int clkdm_register_autodeps(struct clkdm_autodep *ia) +{ + struct clkdm_autodep *a = NULL; + + if (list_empty(&clkdm_list)) + return -EACCES; + + if (!ia) + return -EINVAL; + + if (autodeps) + return -EEXIST; + + autodeps = ia; + for (a = autodeps; a->clkdm.ptr; a++) + _autodep_lookup(a); + + return 0; +} + +static int cpu_notifier(struct notifier_block *nb, unsigned long cmd, void *v) +{ + switch (cmd) { + case CPU_CLUSTER_PM_ENTER: + if (enable_off_mode) + clkdm_save_context(); + break; + case CPU_CLUSTER_PM_EXIT: + if (enable_off_mode) + clkdm_restore_context(); + break; + } + + return NOTIFY_OK; +} + +/** + * clkdm_complete_init - set up the clockdomain layer + * + * Put all clockdomains into software-supervised mode; PM code should + * later enable hardware-supervised mode as appropriate. Must be + * called after clkdm_register_clkdms(). Returns -EACCES if called + * before clkdm_register_clkdms(), or 0 upon success. + */ +int clkdm_complete_init(void) +{ + struct clockdomain *clkdm; + static struct notifier_block nb; + + if (list_empty(&clkdm_list)) + return -EACCES; + + list_for_each_entry(clkdm, &clkdm_list, node) { + clkdm_deny_idle(clkdm); + + _resolve_clkdm_deps(clkdm, clkdm->wkdep_srcs); + clkdm_clear_all_wkdeps(clkdm); + + _resolve_clkdm_deps(clkdm, clkdm->sleepdep_srcs); + clkdm_clear_all_sleepdeps(clkdm); + } + + /* Only AM43XX can lose clkdm context during rtc-ddr suspend */ + if (soc_is_am43xx()) { + nb.notifier_call = cpu_notifier; + cpu_pm_register_notifier(&nb); + } + + return 0; +} + +/** + * clkdm_lookup - look up a clockdomain by name, return a pointer + * @name: name of clockdomain + * + * Find a registered clockdomain by its name @name. Returns a pointer + * to the struct clockdomain if found, or NULL otherwise. + */ +struct clockdomain *clkdm_lookup(const char *name) +{ + struct clockdomain *clkdm, *temp_clkdm; + + if (!name) + return NULL; + + clkdm = NULL; + + list_for_each_entry(temp_clkdm, &clkdm_list, node) { + if (!strcmp(name, temp_clkdm->name)) { + clkdm = temp_clkdm; + break; + } + } + + return clkdm; +} + +/** + * clkdm_for_each - call function on each registered clockdomain + * @fn: callback function * + * + * Call the supplied function @fn for each registered clockdomain. + * The callback function @fn can return anything but 0 to bail + * out early from the iterator. The callback function is called with + * the clkdm_mutex held, so no clockdomain structure manipulation + * functions should be called from the callback, although hardware + * clockdomain control functions are fine. Returns the last return + * value of the callback function, which should be 0 for success or + * anything else to indicate failure; or -EINVAL if the function pointer + * is null. + */ +int clkdm_for_each(int (*fn)(struct clockdomain *clkdm, void *user), + void *user) +{ + struct clockdomain *clkdm; + int ret = 0; + + if (!fn) + return -EINVAL; + + list_for_each_entry(clkdm, &clkdm_list, node) { + ret = (*fn)(clkdm, user); + if (ret) + break; + } + + return ret; +} + + +/** + * clkdm_get_pwrdm - return a ptr to the pwrdm that this clkdm resides in + * @clkdm: struct clockdomain * + * + * Return a pointer to the struct powerdomain that the specified clockdomain + * @clkdm exists in, or returns NULL if @clkdm is NULL. 
+ */ +struct powerdomain *clkdm_get_pwrdm(struct clockdomain *clkdm) +{ + if (!clkdm) + return NULL; + + return clkdm->pwrdm.ptr; +} + + +/* Hardware clockdomain control */ + +/** + * clkdm_add_wkdep - add a wakeup dependency from clkdm2 to clkdm1 + * @clkdm1: wake this struct clockdomain * up (dependent) + * @clkdm2: when this struct clockdomain * wakes up (source) + * + * When the clockdomain represented by @clkdm2 wakes up, wake up + * @clkdm1. Implemented in hardware on the OMAP, this feature is + * designed to reduce wakeup latency of the dependent clockdomain @clkdm1. + * Returns -EINVAL if presented with invalid clockdomain pointers, + * -ENOENT if @clkdm2 cannot wake up clkdm1 in hardware, or 0 upon + * success. + */ +int clkdm_add_wkdep(struct clockdomain *clkdm1, struct clockdomain *clkdm2) +{ + struct clkdm_dep *cd; + int ret; + + if (!clkdm1 || !clkdm2) + return -EINVAL; + + cd = _clkdm_deps_lookup(clkdm2, clkdm1->wkdep_srcs); + if (IS_ERR(cd)) + return PTR_ERR(cd); + + pwrdm_lock(cd->clkdm->pwrdm.ptr); + ret = _clkdm_add_wkdep(clkdm1, clkdm2); + pwrdm_unlock(cd->clkdm->pwrdm.ptr); + + return ret; +} + +/** + * clkdm_del_wkdep - remove a wakeup dependency from clkdm2 to clkdm1 + * @clkdm1: wake this struct clockdomain * up (dependent) + * @clkdm2: when this struct clockdomain * wakes up (source) + * + * Remove a wakeup dependency causing @clkdm1 to wake up when @clkdm2 + * wakes up. Returns -EINVAL if presented with invalid clockdomain + * pointers, -ENOENT if @clkdm2 cannot wake up clkdm1 in hardware, or + * 0 upon success. + */ +int clkdm_del_wkdep(struct clockdomain *clkdm1, struct clockdomain *clkdm2) +{ + struct clkdm_dep *cd; + int ret; + + if (!clkdm1 || !clkdm2) + return -EINVAL; + + cd = _clkdm_deps_lookup(clkdm2, clkdm1->wkdep_srcs); + if (IS_ERR(cd)) + return PTR_ERR(cd); + + pwrdm_lock(cd->clkdm->pwrdm.ptr); + ret = _clkdm_del_wkdep(clkdm1, clkdm2); + pwrdm_unlock(cd->clkdm->pwrdm.ptr); + + return ret; +} + +/** + * clkdm_read_wkdep - read wakeup dependency state from clkdm2 to clkdm1 + * @clkdm1: wake this struct clockdomain * up (dependent) + * @clkdm2: when this struct clockdomain * wakes up (source) + * + * Return 1 if a hardware wakeup dependency exists wherein @clkdm1 will be + * awoken when @clkdm2 wakes up; 0 if dependency is not set; -EINVAL + * if either clockdomain pointer is invalid; or -ENOENT if the hardware + * is incapable. + * + * REVISIT: Currently this function only represents software-controllable + * wakeup dependencies. Wakeup dependencies fixed in hardware are not + * yet handled here. + */ +int clkdm_read_wkdep(struct clockdomain *clkdm1, struct clockdomain *clkdm2) +{ + struct clkdm_dep *cd; + int ret = 0; + + if (!clkdm1 || !clkdm2) + return -EINVAL; + + cd = _clkdm_deps_lookup(clkdm2, clkdm1->wkdep_srcs); + if (IS_ERR(cd)) + ret = PTR_ERR(cd); + + if (!arch_clkdm || !arch_clkdm->clkdm_read_wkdep) + ret = -EINVAL; + + if (ret) { + pr_debug("clockdomain: hardware cannot set/clear wake up of %s when %s wakes up\n", + clkdm1->name, clkdm2->name); + return ret; + } + + /* XXX It's faster to return the wkdep_usecount */ + return arch_clkdm->clkdm_read_wkdep(clkdm1, clkdm2); +} + +/** + * clkdm_clear_all_wkdeps - remove all wakeup dependencies from target clkdm + * @clkdm: struct clockdomain * to remove all wakeup dependencies from + * + * Remove all inter-clockdomain wakeup dependencies that could cause + * @clkdm to wake. 
Intended to be used during boot to initialize the + * PRCM to a known state, after all clockdomains are put into swsup idle + * and woken up. Returns -EINVAL if @clkdm pointer is invalid, or + * 0 upon success. + */ +int clkdm_clear_all_wkdeps(struct clockdomain *clkdm) +{ + if (!clkdm) + return -EINVAL; + + if (!arch_clkdm || !arch_clkdm->clkdm_clear_all_wkdeps) + return -EINVAL; + + return arch_clkdm->clkdm_clear_all_wkdeps(clkdm); +} + +/** + * clkdm_add_sleepdep - add a sleep dependency from clkdm2 to clkdm1 + * @clkdm1: prevent this struct clockdomain * from sleeping (dependent) + * @clkdm2: when this struct clockdomain * is active (source) + * + * Prevent @clkdm1 from automatically going inactive (and then to + * retention or off) if @clkdm2 is active. Returns -EINVAL if + * presented with invalid clockdomain pointers or called on a machine + * that does not support software-configurable hardware sleep + * dependencies, -ENOENT if the specified dependency cannot be set in + * hardware, or 0 upon success. + */ +int clkdm_add_sleepdep(struct clockdomain *clkdm1, struct clockdomain *clkdm2) +{ + struct clkdm_dep *cd; + int ret; + + if (!clkdm1 || !clkdm2) + return -EINVAL; + + cd = _clkdm_deps_lookup(clkdm2, clkdm1->wkdep_srcs); + if (IS_ERR(cd)) + return PTR_ERR(cd); + + pwrdm_lock(cd->clkdm->pwrdm.ptr); + ret = _clkdm_add_sleepdep(clkdm1, clkdm2); + pwrdm_unlock(cd->clkdm->pwrdm.ptr); + + return ret; +} + +/** + * clkdm_del_sleepdep - remove a sleep dependency from clkdm2 to clkdm1 + * @clkdm1: prevent this struct clockdomain * from sleeping (dependent) + * @clkdm2: when this struct clockdomain * is active (source) + * + * Allow @clkdm1 to automatically go inactive (and then to retention or + * off), independent of the activity state of @clkdm2. Returns -EINVAL + * if presented with invalid clockdomain pointers or called on a machine + * that does not support software-configurable hardware sleep dependencies, + * -ENOENT if the specified dependency cannot be cleared in hardware, or + * 0 upon success. + */ +int clkdm_del_sleepdep(struct clockdomain *clkdm1, struct clockdomain *clkdm2) +{ + struct clkdm_dep *cd; + int ret; + + if (!clkdm1 || !clkdm2) + return -EINVAL; + + cd = _clkdm_deps_lookup(clkdm2, clkdm1->wkdep_srcs); + if (IS_ERR(cd)) + return PTR_ERR(cd); + + pwrdm_lock(cd->clkdm->pwrdm.ptr); + ret = _clkdm_del_sleepdep(clkdm1, clkdm2); + pwrdm_unlock(cd->clkdm->pwrdm.ptr); + + return ret; +} + +/** + * clkdm_read_sleepdep - read sleep dependency state from clkdm2 to clkdm1 + * @clkdm1: prevent this struct clockdomain * from sleeping (dependent) + * @clkdm2: when this struct clockdomain * is active (source) + * + * Return 1 if a hardware sleep dependency exists wherein @clkdm1 will + * not be allowed to automatically go inactive if @clkdm2 is active; + * 0 if @clkdm1's automatic power state inactivity transition is independent + * of @clkdm2's; -EINVAL if either clockdomain pointer is invalid or called + * on a machine that does not support software-configurable hardware sleep + * dependencies; or -ENOENT if the hardware is incapable. + * + * REVISIT: Currently this function only represents software-controllable + * sleep dependencies. Sleep dependencies fixed in hardware are not + * yet handled here. 
+ */ +int clkdm_read_sleepdep(struct clockdomain *clkdm1, struct clockdomain *clkdm2) +{ + struct clkdm_dep *cd; + int ret = 0; + + if (!clkdm1 || !clkdm2) + return -EINVAL; + + cd = _clkdm_deps_lookup(clkdm2, clkdm1->sleepdep_srcs); + if (IS_ERR(cd)) + ret = PTR_ERR(cd); + + if (!arch_clkdm || !arch_clkdm->clkdm_read_sleepdep) + ret = -EINVAL; + + if (ret) { + pr_debug("clockdomain: hardware cannot set/clear sleep dependency affecting %s from %s\n", + clkdm1->name, clkdm2->name); + return ret; + } + + /* XXX It's faster to return the sleepdep_usecount */ + return arch_clkdm->clkdm_read_sleepdep(clkdm1, clkdm2); +} + +/** + * clkdm_clear_all_sleepdeps - remove all sleep dependencies from target clkdm + * @clkdm: struct clockdomain * to remove all sleep dependencies from + * + * Remove all inter-clockdomain sleep dependencies that could prevent + * @clkdm from idling. Intended to be used during boot to initialize the + * PRCM to a known state, after all clockdomains are put into swsup idle + * and woken up. Returns -EINVAL if @clkdm pointer is invalid, or + * 0 upon success. + */ +int clkdm_clear_all_sleepdeps(struct clockdomain *clkdm) +{ + if (!clkdm) + return -EINVAL; + + if (!arch_clkdm || !arch_clkdm->clkdm_clear_all_sleepdeps) + return -EINVAL; + + return arch_clkdm->clkdm_clear_all_sleepdeps(clkdm); +} + +/** + * clkdm_sleep_nolock - force clockdomain sleep transition (lockless) + * @clkdm: struct clockdomain * + * + * Instruct the CM to force a sleep transition on the specified + * clockdomain @clkdm. Only for use by the powerdomain code. Returns + * -EINVAL if @clkdm is NULL or if clockdomain does not support + * software-initiated sleep; 0 upon success. + */ +static int clkdm_sleep_nolock(struct clockdomain *clkdm) +{ + int ret; + + if (!clkdm) + return -EINVAL; + + if (!(clkdm->flags & CLKDM_CAN_FORCE_SLEEP)) { + pr_debug("clockdomain: %s does not support forcing sleep via software\n", + clkdm->name); + return -EINVAL; + } + + if (!arch_clkdm || !arch_clkdm->clkdm_sleep) + return -EINVAL; + + pr_debug("clockdomain: forcing sleep on %s\n", clkdm->name); + + clkdm->_flags &= ~_CLKDM_FLAG_HWSUP_ENABLED; + ret = arch_clkdm->clkdm_sleep(clkdm); + ret |= pwrdm_state_switch_nolock(clkdm->pwrdm.ptr); + + return ret; +} + +/** + * clkdm_sleep - force clockdomain sleep transition + * @clkdm: struct clockdomain * + * + * Instruct the CM to force a sleep transition on the specified + * clockdomain @clkdm. Returns -EINVAL if @clkdm is NULL or if + * clockdomain does not support software-initiated sleep; 0 upon + * success. + */ +int clkdm_sleep(struct clockdomain *clkdm) +{ + int ret; + + pwrdm_lock(clkdm->pwrdm.ptr); + ret = clkdm_sleep_nolock(clkdm); + pwrdm_unlock(clkdm->pwrdm.ptr); + + return ret; +} + +/** + * clkdm_wakeup_nolock - force clockdomain wakeup transition (lockless) + * @clkdm: struct clockdomain * + * + * Instruct the CM to force a wakeup transition on the specified + * clockdomain @clkdm. Only for use by the powerdomain code. Returns + * -EINVAL if @clkdm is NULL or if the clockdomain does not support + * software-controlled wakeup; 0 upon success. 
+ */ +static int clkdm_wakeup_nolock(struct clockdomain *clkdm) +{ + int ret; + + if (!clkdm) + return -EINVAL; + + if (!(clkdm->flags & CLKDM_CAN_FORCE_WAKEUP)) { + pr_debug("clockdomain: %s does not support forcing wakeup via software\n", + clkdm->name); + return -EINVAL; + } + + if (!arch_clkdm || !arch_clkdm->clkdm_wakeup) + return -EINVAL; + + pr_debug("clockdomain: forcing wakeup on %s\n", clkdm->name); + + clkdm->_flags &= ~_CLKDM_FLAG_HWSUP_ENABLED; + ret = arch_clkdm->clkdm_wakeup(clkdm); + ret |= pwrdm_state_switch_nolock(clkdm->pwrdm.ptr); + + return ret; +} + +/** + * clkdm_wakeup - force clockdomain wakeup transition + * @clkdm: struct clockdomain * + * + * Instruct the CM to force a wakeup transition on the specified + * clockdomain @clkdm. Returns -EINVAL if @clkdm is NULL or if the + * clockdomain does not support software-controlled wakeup; 0 upon + * success. + */ +int clkdm_wakeup(struct clockdomain *clkdm) +{ + int ret; + + pwrdm_lock(clkdm->pwrdm.ptr); + ret = clkdm_wakeup_nolock(clkdm); + pwrdm_unlock(clkdm->pwrdm.ptr); + + return ret; +} + +/** + * clkdm_allow_idle_nolock - enable hwsup idle transitions for clkdm + * @clkdm: struct clockdomain * + * + * Allow the hardware to automatically switch the clockdomain @clkdm + * into active or idle states, as needed by downstream clocks. If the + * clockdomain has any downstream clocks enabled in the clock + * framework, wkdep/sleepdep autodependencies are added; this is so + * device drivers can read and write to the device. Only for use by + * the powerdomain code. No return value. + */ +void clkdm_allow_idle_nolock(struct clockdomain *clkdm) +{ + if (!clkdm) + return; + + if (!WARN_ON(!clkdm->forcewake_count)) + clkdm->forcewake_count--; + + if (clkdm->forcewake_count) + return; + + if (!clkdm->usecount && (clkdm->flags & CLKDM_CAN_FORCE_SLEEP)) + clkdm_sleep_nolock(clkdm); + + if (!(clkdm->flags & CLKDM_CAN_ENABLE_AUTO)) + return; + + if (clkdm->flags & CLKDM_MISSING_IDLE_REPORTING) + return; + + if (!arch_clkdm || !arch_clkdm->clkdm_allow_idle) + return; + + pr_debug("clockdomain: enabling automatic idle transitions for %s\n", + clkdm->name); + + clkdm->_flags |= _CLKDM_FLAG_HWSUP_ENABLED; + arch_clkdm->clkdm_allow_idle(clkdm); + pwrdm_state_switch_nolock(clkdm->pwrdm.ptr); +} + +/** + * clkdm_allow_idle - enable hwsup idle transitions for clkdm + * @clkdm: struct clockdomain * + * + * Allow the hardware to automatically switch the clockdomain @clkdm into + * active or idle states, as needed by downstream clocks. If the + * clockdomain has any downstream clocks enabled in the clock + * framework, wkdep/sleepdep autodependencies are added; this is so + * device drivers can read and write to the device. No return value. + */ +void clkdm_allow_idle(struct clockdomain *clkdm) +{ + pwrdm_lock(clkdm->pwrdm.ptr); + clkdm_allow_idle_nolock(clkdm); + pwrdm_unlock(clkdm->pwrdm.ptr); +} + +/** + * clkdm_deny_idle - disable hwsup idle transitions for clkdm + * @clkdm: struct clockdomain * + * + * Prevent the hardware from automatically switching the clockdomain + * @clkdm into inactive or idle states. If the clockdomain has + * downstream clocks enabled in the clock framework, wkdep/sleepdep + * autodependencies are removed. Only for use by the powerdomain + * code. No return value. 
+ */ +void clkdm_deny_idle_nolock(struct clockdomain *clkdm) +{ + if (!clkdm) + return; + + if (clkdm->forcewake_count++) + return; + + if (clkdm->flags & CLKDM_CAN_FORCE_WAKEUP) + clkdm_wakeup_nolock(clkdm); + + if (!(clkdm->flags & CLKDM_CAN_DISABLE_AUTO)) + return; + + if (clkdm->flags & CLKDM_MISSING_IDLE_REPORTING) + return; + + if (!arch_clkdm || !arch_clkdm->clkdm_deny_idle) + return; + + pr_debug("clockdomain: disabling automatic idle transitions for %s\n", + clkdm->name); + + clkdm->_flags &= ~_CLKDM_FLAG_HWSUP_ENABLED; + arch_clkdm->clkdm_deny_idle(clkdm); + pwrdm_state_switch_nolock(clkdm->pwrdm.ptr); +} + +/** + * clkdm_deny_idle - disable hwsup idle transitions for clkdm + * @clkdm: struct clockdomain * + * + * Prevent the hardware from automatically switching the clockdomain + * @clkdm into inactive or idle states. If the clockdomain has + * downstream clocks enabled in the clock framework, wkdep/sleepdep + * autodependencies are removed. No return value. + */ +void clkdm_deny_idle(struct clockdomain *clkdm) +{ + pwrdm_lock(clkdm->pwrdm.ptr); + clkdm_deny_idle_nolock(clkdm); + pwrdm_unlock(clkdm->pwrdm.ptr); +} + +/* Public autodep handling functions (deprecated) */ + +/** + * clkdm_add_autodeps - add auto sleepdeps/wkdeps to clkdm upon clock enable + * @clkdm: struct clockdomain * + * + * Add the "autodep" sleep & wakeup dependencies to clockdomain 'clkdm' + * in hardware-supervised mode. Meant to be called from clock framework + * when a clock inside clockdomain 'clkdm' is enabled. No return value. + * + * XXX autodeps are deprecated and should be removed at the earliest + * opportunity + */ +void clkdm_add_autodeps(struct clockdomain *clkdm) +{ + struct clkdm_autodep *autodep; + + if (!autodeps || clkdm->flags & CLKDM_NO_AUTODEPS) + return; + + for (autodep = autodeps; autodep->clkdm.ptr; autodep++) { + if (IS_ERR(autodep->clkdm.ptr)) + continue; + + pr_debug("clockdomain: %s: adding %s sleepdep/wkdep\n", + clkdm->name, autodep->clkdm.ptr->name); + + _clkdm_add_sleepdep(clkdm, autodep->clkdm.ptr); + _clkdm_add_wkdep(clkdm, autodep->clkdm.ptr); + } +} + +/** + * clkdm_del_autodeps - remove auto sleepdeps/wkdeps from clkdm + * @clkdm: struct clockdomain * + * + * Remove the "autodep" sleep & wakeup dependencies from clockdomain 'clkdm' + * in hardware-supervised mode. Meant to be called from clock framework + * when a clock inside clockdomain 'clkdm' is disabled. No return value. + * + * XXX autodeps are deprecated and should be removed at the earliest + * opportunity + */ +void clkdm_del_autodeps(struct clockdomain *clkdm) +{ + struct clkdm_autodep *autodep; + + if (!autodeps || clkdm->flags & CLKDM_NO_AUTODEPS) + return; + + for (autodep = autodeps; autodep->clkdm.ptr; autodep++) { + if (IS_ERR(autodep->clkdm.ptr)) + continue; + + pr_debug("clockdomain: %s: removing %s sleepdep/wkdep\n", + clkdm->name, autodep->clkdm.ptr->name); + + _clkdm_del_sleepdep(clkdm, autodep->clkdm.ptr); + _clkdm_del_wkdep(clkdm, autodep->clkdm.ptr); + } +} + +/* Clockdomain-to-clock/hwmod framework interface code */ + +/** + * clkdm_clk_enable - add an enabled downstream clock to this clkdm + * @clkdm: struct clockdomain * + * @clk: struct clk * of the enabled downstream clock + * + * Increment the usecount of the clockdomain @clkdm and ensure that it + * is awake before @clk is enabled. Intended to be called by + * clk_enable() code. If the clockdomain is in software-supervised + * idle mode, force the clockdomain to wake. 
If the clockdomain is in + * hardware-supervised idle mode, add clkdm-pwrdm autodependencies, to + * ensure that devices in the clockdomain can be read from/written to + * by on-chip processors. Returns -EINVAL if passed null pointers; + * returns 0 upon success or if the clockdomain is in hwsup idle mode. + */ +int clkdm_clk_enable(struct clockdomain *clkdm, struct clk *unused) +{ + if (!clkdm || !arch_clkdm || !arch_clkdm->clkdm_clk_enable) + return -EINVAL; + + pwrdm_lock(clkdm->pwrdm.ptr); + + /* + * For arch's with no autodeps, clkcm_clk_enable + * should be called for every clock instance or hwmod that is + * enabled, so the clkdm can be force woken up. + */ + clkdm->usecount++; + if (clkdm->usecount > 1 && autodeps) { + pwrdm_unlock(clkdm->pwrdm.ptr); + return 0; + } + + arch_clkdm->clkdm_clk_enable(clkdm); + pwrdm_state_switch_nolock(clkdm->pwrdm.ptr); + pwrdm_unlock(clkdm->pwrdm.ptr); + + pr_debug("clockdomain: %s: enabled\n", clkdm->name); + + return 0; +} + +/** + * clkdm_clk_disable - remove an enabled downstream clock from this clkdm + * @clkdm: struct clockdomain * + * @clk: struct clk * of the disabled downstream clock + * + * Decrement the usecount of this clockdomain @clkdm when @clk is + * disabled. Intended to be called by clk_disable() code. If the + * clockdomain usecount goes to 0, put the clockdomain to sleep + * (software-supervised mode) or remove the clkdm autodependencies + * (hardware-supervised mode). Returns -EINVAL if passed null + * pointers; -ERANGE if the @clkdm usecount underflows; or returns 0 + * upon success or if the clockdomain is in hwsup idle mode. + */ +int clkdm_clk_disable(struct clockdomain *clkdm, struct clk *clk) +{ + if (!clkdm || !arch_clkdm || !arch_clkdm->clkdm_clk_disable) + return -EINVAL; + + pwrdm_lock(clkdm->pwrdm.ptr); + + /* corner case: disabling unused clocks */ + if (clk && (__clk_get_enable_count(clk) == 0) && clkdm->usecount == 0) + goto ccd_exit; + + if (clkdm->usecount == 0) { + pwrdm_unlock(clkdm->pwrdm.ptr); + WARN_ON(1); /* underflow */ + return -ERANGE; + } + + clkdm->usecount--; + if (clkdm->usecount > 0) { + pwrdm_unlock(clkdm->pwrdm.ptr); + return 0; + } + + arch_clkdm->clkdm_clk_disable(clkdm); + pwrdm_state_switch_nolock(clkdm->pwrdm.ptr); + + pr_debug("clockdomain: %s: disabled\n", clkdm->name); + +ccd_exit: + pwrdm_unlock(clkdm->pwrdm.ptr); + + return 0; +} + +/** + * clkdm_hwmod_enable - add an enabled downstream hwmod to this clkdm + * @clkdm: struct clockdomain * + * @oh: struct omap_hwmod * of the enabled downstream hwmod + * + * Increment the usecount of the clockdomain @clkdm and ensure that it + * is awake before @oh is enabled. Intended to be called by + * module_enable() code. + * If the clockdomain is in software-supervised idle mode, force the + * clockdomain to wake. If the clockdomain is in hardware-supervised idle + * mode, add clkdm-pwrdm autodependencies, to ensure that devices in the + * clockdomain can be read from/written to by on-chip processors. + * Returns -EINVAL if passed null pointers; + * returns 0 upon success or if the clockdomain is in hwsup idle mode. + */ +int clkdm_hwmod_enable(struct clockdomain *clkdm, struct omap_hwmod *oh) +{ + /* The clkdm attribute does not exist yet prior OMAP4 */ + if (cpu_is_omap24xx() || cpu_is_omap34xx()) + return 0; + + /* + * XXX Rewrite this code to maintain a list of enabled + * downstream hwmods for debugging purposes? 
+ */ + + if (!oh) + return -EINVAL; + + return clkdm_clk_enable(clkdm, NULL); +} + +/** + * clkdm_hwmod_disable - remove an enabled downstream hwmod from this clkdm + * @clkdm: struct clockdomain * + * @oh: struct omap_hwmod * of the disabled downstream hwmod + * + * Decrement the usecount of this clockdomain @clkdm when @oh is + * disabled. Intended to be called by module_disable() code. + * If the clockdomain usecount goes to 0, put the clockdomain to sleep + * (software-supervised mode) or remove the clkdm autodependencies + * (hardware-supervised mode). + * Returns -EINVAL if passed null pointers; -ERANGE if the @clkdm usecount + * underflows; or returns 0 upon success or if the clockdomain is in hwsup + * idle mode. + */ +int clkdm_hwmod_disable(struct clockdomain *clkdm, struct omap_hwmod *oh) +{ + /* The clkdm attribute does not exist yet prior OMAP4 */ + if (cpu_is_omap24xx() || cpu_is_omap34xx()) + return 0; + + if (!oh) + return -EINVAL; + + return clkdm_clk_disable(clkdm, NULL); +} + +/** + * _clkdm_save_context - save the context for the control of this clkdm + * + * Due to a suspend or hibernation operation, the state of the registers + * controlling this clkdm will be lost, save their context. + */ +static int _clkdm_save_context(struct clockdomain *clkdm, void *unused) +{ + if (!arch_clkdm || !arch_clkdm->clkdm_save_context) + return -EINVAL; + + return arch_clkdm->clkdm_save_context(clkdm); +} + +/** + * _clkdm_restore_context - restore context for control of this clkdm + * + * Restore the register values for this clockdomain. + */ +static int _clkdm_restore_context(struct clockdomain *clkdm, void *unused) +{ + if (!arch_clkdm || !arch_clkdm->clkdm_restore_context) + return -EINVAL; + + return arch_clkdm->clkdm_restore_context(clkdm); +} + +/** + * clkdm_save_context - Saves the context for each registered clkdm + * + * Save the context for each registered clockdomain. + */ +void clkdm_save_context(void) +{ + clkdm_for_each(_clkdm_save_context, NULL); +} + +/** + * clkdm_restore_context - Restores the context for each registered clkdm + * + * Restore the context for each registered clockdomain. + */ +void clkdm_restore_context(void) +{ + clkdm_for_each(_clkdm_restore_context, NULL); +} |
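
For orientation, here is a minimal sketch (not part of the merged patch) of how other mach-omap2 code might drive the clockdomain API this file adds. clkdm_lookup(), clkdm_deny_idle(), clkdm_allow_idle() and clkdm_for_each() are the functions defined in the diff above; the clockdomain name "l4ls_clkdm" and the wrapper function are purely illustrative assumptions.

/* Illustrative sketch only -- not part of this merge.  Uses the API added
 * in arch/arm/mach-omap2/clockdomain.c; the clockdomain name below is a
 * placeholder, not a claim about any particular SoC. */
#include <linux/kernel.h>
#include "clockdomain.h"

/* clkdm_for_each() callback: count registered clockdomains. */
static int count_one_clkdm(struct clockdomain *clkdm, void *user)
{
	int *count = user;

	(*count)++;		/* returning non-zero would stop the iteration early */
	return 0;
}

static void clockdomain_usage_sketch(void)
{
	struct clockdomain *clkdm;
	int count = 0;

	/* Look up a clockdomain by name; returns NULL if it is not registered. */
	clkdm = clkdm_lookup("l4ls_clkdm");	/* hypothetical name */
	if (!clkdm)
		return;

	/* Keep the domain out of idle while its registers are accessed... */
	clkdm_deny_idle(clkdm);
	/* ...device register accesses would go here... */
	clkdm_allow_idle(clkdm);	/* hand control back to hardware-supervised idle */

	clkdm_for_each(count_one_clkdm, &count);
	pr_debug("clockdomain sketch: %d clockdomains registered\n", count);
}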