author		2023-02-21 18:24:12 -0800
committer	2023-02-21 18:24:12 -0800
commit		5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree		cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /arch/arm/common/mcpm_entry.c
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set the default RPS configuration for new
netdevs (a sketch follows this list).
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add a configurable limit of MDB entries per port and per port-VLAN.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
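A minimal sketch of seeding that RPS default from user space, assuming
the sysctl is exposed via procfs as net.core.rps_default_mask (the path
below is an assumption; verify it against your kernel):

/* Minimal sketch: set a default RPS CPU mask inherited by netdevs
 * created later. Assumes /proc/sys/net/core/rps_default_mask exists
 * (an assumption); the value is the usual hex CPU bitmap, here
 * CPUs 0-3. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/net/core/rps_default_mask", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fputs("f\n", f);
	fclose(f);
	return 0;
}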
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option to control the local port
range on a socket-by-socket basis (see the sketch after this list).
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
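A minimal sketch of the new IP_LOCAL_PORT_RANGE option, assuming the
uAPI encoding (lower bound in the low 16 bits, upper bound in the high
16 bits) and the <linux/in.h> constant value; verify both against your
headers:

/* Hedged sketch: restrict the ephemeral port range for one socket. */
#include <stdio.h>
#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IP_LOCAL_PORT_RANGE
#define IP_LOCAL_PORT_RANGE 51	/* from <linux/in.h>, kernel >= 6.3 */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	/* Ask the kernel to pick local ports only from 40000..40100:
	 * low bound in the lower 16 bits, high bound in the upper. */
	uint32_t range = 40000 | (40100u << 16);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
		       &range, sizeof(range)) < 0)
		perror("setsockopt(IP_LOCAL_PORT_RANGE)");
	return 0;
}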
BPF:
- Add an rbtree data structure, following the "next-gen data
structure" precedent set by the recently added linked list: it is
exposed via kfuncs + kptrs instead of a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in
collect-metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscall as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs (a
registration sketch follows this list).
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
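A hedged sketch of the __bpf_kfunc tag in use: a hypothetical
bpf_example_square() kfunc (not an existing kernel symbol) defined and
registered for tracing programs via the existing BTF_SET8_* macros and
register_btf_kfunc_id_set():

/* Kernel-side sketch; bpf_example_square is illustrative only. */
#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/init.h>
#include <linux/module.h>

__bpf_kfunc u64 bpf_example_square(u64 x)
{
	return x * x;
}

BTF_SET8_START(example_kfunc_ids)
BTF_ID_FLAGS(func, bpf_example_square)
BTF_SET8_END(example_kfunc_ids)

static const struct btf_kfunc_id_set example_kfunc_set = {
	.owner	= THIS_MODULE,
	.set	= &example_kfunc_ids,
};

static int __init example_kfunc_init(void)
{
	/* Make the kfunc callable from tracing programs. */
	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
					 &example_kfunc_set);
}
late_initcall(example_kfunc_init);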
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still see WARN splats caused by races in the
out-of-band /proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of the Physical Layer Collision Avoidance
(PLCA) Reconciliation Sublayer (RS) (802.3cg-2019), a modern
version of shared-medium Ethernet.
- Support the MAC Merge layer (IEEE 802.3-2018 clause 99), allowing
preemption of low-priority frames by high-priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if the old Wireless Extensions user space interface is used
with modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
the nl80211 interface instead.
- Improve the CAN bit timing configuration: use extack to return
error messages directly to user space, and update the SJW handling,
including a new default value that benefits CAN-FD controllers by
increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper phys
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP-related parameters
(STEP is the connection to the radio on platforms with integrated
wifi) from the BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
selftests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'arch/arm/common/mcpm_entry.c')
-rw-r--r--	arch/arm/common/mcpm_entry.c | 456
1 file changed, 456 insertions(+), 0 deletions(-)
diff --git a/arch/arm/common/mcpm_entry.c b/arch/arm/common/mcpm_entry.c
new file mode 100644
index 000000000..8a9aeeb50
--- /dev/null
+++ b/arch/arm/common/mcpm_entry.c
@@ -0,0 +1,456 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * arch/arm/common/mcpm_entry.c -- entry point for multi-cluster PM
+ *
+ * Created by:  Nicolas Pitre, March 2012
+ * Copyright:   (C) 2012-2013  Linaro Limited
+ */
+
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/irqflags.h>
+#include <linux/cpu_pm.h>
+
+#include <asm/mcpm.h>
+#include <asm/cacheflush.h>
+#include <asm/idmap.h>
+#include <asm/cputype.h>
+#include <asm/suspend.h>
+
+/*
+ * The public API for this code is documented in arch/arm/include/asm/mcpm.h.
+ * For a comprehensive description of the main algorithm used here, please
+ * see Documentation/arm/cluster-pm-race-avoidance.rst.
+ */
+
+struct sync_struct mcpm_sync;
+
+/*
+ * __mcpm_cpu_going_down: Indicates that the cpu is being torn down.
+ *    This must be called at the point of committing to teardown of a CPU.
+ *    The CPU cache (SCTRL.C bit) is expected to still be active.
+ */
+static void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster)
+{
+	mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_GOING_DOWN;
+	sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
+}
+
+/*
+ * __mcpm_cpu_down: Indicates that cpu teardown is complete and that the
+ *    cluster can be torn down without disrupting this CPU.
+ *    To avoid deadlocks, this must be called before a CPU is powered down.
+ *    The CPU cache (SCTRL.C bit) is expected to be off.
+ *    However L2 cache might or might not be active.
+ */
+static void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster)
+{
+	dmb();
+	mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_DOWN;
+	sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
+	sev();
+}
+
+/*
+ * __mcpm_outbound_leave_critical: Leave the cluster teardown critical section.
+ * @state: the final state of the cluster:
+ *     CLUSTER_UP: no destructive teardown was done and the cluster has been
+ *         restored to the previous state (CPU cache still active); or
+ *     CLUSTER_DOWN: the cluster has been torn-down, ready for power-off
+ *         (CPU cache disabled, L2 cache either enabled or disabled).
+ */
+static void __mcpm_outbound_leave_critical(unsigned int cluster, int state)
+{
+	dmb();
+	mcpm_sync.clusters[cluster].cluster = state;
+	sync_cache_w(&mcpm_sync.clusters[cluster].cluster);
+	sev();
+}
+
+/*
+ * __mcpm_outbound_enter_critical: Enter the cluster teardown critical section.
+ * This function should be called by the last man, after local CPU teardown
+ * is complete.  CPU cache expected to be active.
+ *
+ * Returns:
+ *     false: the critical section was not entered because an inbound CPU was
+ *         observed, or the cluster is already being set up;
+ *     true: the critical section was entered: it is now safe to tear down the
+ *         cluster.
+ */
+static bool __mcpm_outbound_enter_critical(unsigned int cpu, unsigned int cluster)
+{
+	unsigned int i;
+	struct mcpm_sync_struct *c = &mcpm_sync.clusters[cluster];
+
+	/* Warn inbound CPUs that the cluster is being torn down: */
+	c->cluster = CLUSTER_GOING_DOWN;
+	sync_cache_w(&c->cluster);
+
+	/* Back out if the inbound cluster is already in the critical region: */
+	sync_cache_r(&c->inbound);
+	if (c->inbound == INBOUND_COMING_UP)
+		goto abort;
+
+	/*
+	 * Wait for all CPUs to get out of the GOING_DOWN state, so that local
+	 * teardown is complete on each CPU before tearing down the cluster.
+	 *
+	 * If any CPU has been woken up again from the DOWN state, then we
+	 * shouldn't be taking the cluster down at all: abort in that case.
+	 */
+	sync_cache_r(&c->cpus);
+	for (i = 0; i < MAX_CPUS_PER_CLUSTER; i++) {
+		int cpustate;
+
+		if (i == cpu)
+			continue;
+
+		while (1) {
+			cpustate = c->cpus[i].cpu;
+			if (cpustate != CPU_GOING_DOWN)
+				break;
+
+			wfe();
+			sync_cache_r(&c->cpus[i].cpu);
+		}
+
+		switch (cpustate) {
+		case CPU_DOWN:
+			continue;
+
+		default:
+			goto abort;
+		}
+	}
+
+	return true;
+
+abort:
+	__mcpm_outbound_leave_critical(cluster, CLUSTER_UP);
+	return false;
+}
+
+static int __mcpm_cluster_state(unsigned int cluster)
+{
+	sync_cache_r(&mcpm_sync.clusters[cluster].cluster);
+	return mcpm_sync.clusters[cluster].cluster;
+}
+
+extern unsigned long mcpm_entry_vectors[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER];
+
+void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr)
+{
+	unsigned long val = ptr ? __pa_symbol(ptr) : 0;
+	mcpm_entry_vectors[cluster][cpu] = val;
+	sync_cache_w(&mcpm_entry_vectors[cluster][cpu]);
+}
+
+extern unsigned long mcpm_entry_early_pokes[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER][2];
+
+void mcpm_set_early_poke(unsigned cpu, unsigned cluster,
+			 unsigned long poke_phys_addr, unsigned long poke_val)
+{
+	unsigned long *poke = &mcpm_entry_early_pokes[cluster][cpu][0];
+	poke[0] = poke_phys_addr;
+	poke[1] = poke_val;
+	__sync_cache_range_w(poke, 2 * sizeof(*poke));
+}
+
+static const struct mcpm_platform_ops *platform_ops;
+
+int __init mcpm_platform_register(const struct mcpm_platform_ops *ops)
+{
+	if (platform_ops)
+		return -EBUSY;
+	platform_ops = ops;
+	return 0;
+}
+
+bool mcpm_is_available(void)
+{
+	return (platform_ops) ? true : false;
+}
+EXPORT_SYMBOL_GPL(mcpm_is_available);
+
+/*
+ * We can't use regular spinlocks. In the switcher case, it is possible
+ * for an outbound CPU to call power_down() after its inbound counterpart
+ * is already live using the same logical CPU number which trips lockdep
+ * debugging.
+ */
+static arch_spinlock_t mcpm_lock = __ARCH_SPIN_LOCK_UNLOCKED;
+
+static int mcpm_cpu_use_count[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER];
+
+static inline bool mcpm_cluster_unused(unsigned int cluster)
+{
+	int i, cnt;
+	for (i = 0, cnt = 0; i < MAX_CPUS_PER_CLUSTER; i++)
+		cnt |= mcpm_cpu_use_count[cluster][i];
+	return !cnt;
+}
+
+int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster)
+{
+	bool cpu_is_down, cluster_is_down;
+	int ret = 0;
+
+	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+	if (!platform_ops)
+		return -EUNATCH; /* try not to shadow power_up errors */
+	might_sleep();
+
+	/*
+	 * Since this is called with IRQs enabled, and no arch_spin_lock_irq
+	 * variant exists, we need to disable IRQs manually here.
+	 */
+	local_irq_disable();
+	arch_spin_lock(&mcpm_lock);
+
+	cpu_is_down = !mcpm_cpu_use_count[cluster][cpu];
+	cluster_is_down = mcpm_cluster_unused(cluster);
+
+	mcpm_cpu_use_count[cluster][cpu]++;
+	/*
+	 * The only possible values are:
+	 * 0 = CPU down
+	 * 1 = CPU (still) up
+	 * 2 = CPU requested to be up before it had a chance
+	 *     to actually make itself down.
+	 * Any other value is a bug.
+	 */
+	BUG_ON(mcpm_cpu_use_count[cluster][cpu] != 1 &&
+	       mcpm_cpu_use_count[cluster][cpu] != 2);
+
+	if (cluster_is_down)
+		ret = platform_ops->cluster_powerup(cluster);
+	if (cpu_is_down && !ret)
+		ret = platform_ops->cpu_powerup(cpu, cluster);
+
+	arch_spin_unlock(&mcpm_lock);
+	local_irq_enable();
+	return ret;
+}
+
+typedef typeof(cpu_reset) phys_reset_t;
+
+void mcpm_cpu_power_down(void)
+{
+	unsigned int mpidr, cpu, cluster;
+	bool cpu_going_down, last_man;
+	phys_reset_t phys_reset;
+
+	mpidr = read_cpuid_mpidr();
+	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+	if (WARN_ON_ONCE(!platform_ops))
+		return;
+	BUG_ON(!irqs_disabled());
+
+	setup_mm_for_reboot();
+
+	__mcpm_cpu_going_down(cpu, cluster);
+	arch_spin_lock(&mcpm_lock);
+	BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
+
+	mcpm_cpu_use_count[cluster][cpu]--;
+	BUG_ON(mcpm_cpu_use_count[cluster][cpu] != 0 &&
+	       mcpm_cpu_use_count[cluster][cpu] != 1);
+	cpu_going_down = !mcpm_cpu_use_count[cluster][cpu];
+	last_man = mcpm_cluster_unused(cluster);
+
+	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
+		platform_ops->cpu_powerdown_prepare(cpu, cluster);
+		platform_ops->cluster_powerdown_prepare(cluster);
+		arch_spin_unlock(&mcpm_lock);
+		platform_ops->cluster_cache_disable();
+		__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
+	} else {
+		if (cpu_going_down)
+			platform_ops->cpu_powerdown_prepare(cpu, cluster);
+		arch_spin_unlock(&mcpm_lock);
+		/*
+		 * If cpu_going_down is false here, that means a power_up
+		 * request raced ahead of us.  Even if we do not want to
+		 * shut this CPU down, the caller still expects execution
+		 * to return through the system resume entry path, like
+		 * when the WFI is aborted due to a new IRQ or the like..
+		 * So let's continue with cache cleaning in all cases.
+		 */
+		platform_ops->cpu_cache_disable();
+	}
+
+	__mcpm_cpu_down(cpu, cluster);
+
+	/* Now we are prepared for power-down, do it: */
+	if (cpu_going_down)
+		wfi();
+
+	/*
+	 * It is possible for a power_up request to happen concurrently
+	 * with a power_down request for the same CPU. In this case the
+	 * CPU might not be able to actually enter a powered down state
+	 * with the WFI instruction if the power_up request has removed
+	 * the required reset condition.  We must perform a re-entry in
+	 * the kernel as if the power_up method just had deasserted reset
+	 * on the CPU.
+	 */
+	phys_reset = (phys_reset_t)(unsigned long)__pa_symbol(cpu_reset);
+	phys_reset(__pa_symbol(mcpm_entry_point), false);
+
+	/* should never get here */
+	BUG();
+}
+
+int mcpm_wait_for_cpu_powerdown(unsigned int cpu, unsigned int cluster)
+{
+	int ret;
+
+	if (WARN_ON_ONCE(!platform_ops || !platform_ops->wait_for_powerdown))
+		return -EUNATCH;
+
+	ret = platform_ops->wait_for_powerdown(cpu, cluster);
+	if (ret)
+		pr_warn("%s: cpu %u, cluster %u failed to power down (%d)\n",
+			__func__, cpu, cluster, ret);
+
+	return ret;
+}
+
+void mcpm_cpu_suspend(void)
+{
+	if (WARN_ON_ONCE(!platform_ops))
+		return;
+
+	/* Some platforms might have to enable special resume modes, etc. */
+	if (platform_ops->cpu_suspend_prepare) {
+		unsigned int mpidr = read_cpuid_mpidr();
+		unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+		unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+		arch_spin_lock(&mcpm_lock);
+		platform_ops->cpu_suspend_prepare(cpu, cluster);
+		arch_spin_unlock(&mcpm_lock);
+	}
+	mcpm_cpu_power_down();
+}
+
+int mcpm_cpu_powered_up(void)
+{
+	unsigned int mpidr, cpu, cluster;
+	bool cpu_was_down, first_man;
+	unsigned long flags;
+
+	if (!platform_ops)
+		return -EUNATCH;
+
+	mpidr = read_cpuid_mpidr();
+	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+	local_irq_save(flags);
+	arch_spin_lock(&mcpm_lock);
+
+	cpu_was_down = !mcpm_cpu_use_count[cluster][cpu];
+	first_man = mcpm_cluster_unused(cluster);
+
+	if (first_man && platform_ops->cluster_is_up)
+		platform_ops->cluster_is_up(cluster);
+	if (cpu_was_down)
+		mcpm_cpu_use_count[cluster][cpu] = 1;
+	if (platform_ops->cpu_is_up)
+		platform_ops->cpu_is_up(cpu, cluster);
+
+	arch_spin_unlock(&mcpm_lock);
+	local_irq_restore(flags);
+
+	return 0;
+}
+
+#ifdef CONFIG_ARM_CPU_SUSPEND
+
+static int __init nocache_trampoline(unsigned long _arg)
+{
+	void (*cache_disable)(void) = (void *)_arg;
+	unsigned int mpidr = read_cpuid_mpidr();
+	unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+	phys_reset_t phys_reset;
+
+	mcpm_set_entry_vector(cpu, cluster, cpu_resume_no_hyp);
+	setup_mm_for_reboot();
+
+	__mcpm_cpu_going_down(cpu, cluster);
+	BUG_ON(!__mcpm_outbound_enter_critical(cpu, cluster));
+	cache_disable();
+	__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
+	__mcpm_cpu_down(cpu, cluster);
+
+	phys_reset = (phys_reset_t)(unsigned long)__pa_symbol(cpu_reset);
+	phys_reset(__pa_symbol(mcpm_entry_point), false);
+	BUG();
+}
+
+int __init mcpm_loopback(void (*cache_disable)(void))
+{
+	int ret;
+
+	/*
+	 * We're going to soft-restart the current CPU through the
+	 * low-level MCPM code by leveraging the suspend/resume
+	 * infrastructure. Let's play it safe by using cpu_pm_enter()
+	 * in case the CPU init code path resets the VFP or similar.
+	 */
+	local_irq_disable();
+	local_fiq_disable();
+	ret = cpu_pm_enter();
+	if (!ret) {
+		ret = cpu_suspend((unsigned long)cache_disable, nocache_trampoline);
+		cpu_pm_exit();
+	}
+	local_fiq_enable();
+	local_irq_enable();
+	if (ret)
+		pr_err("%s returned %d\n", __func__, ret);
+	return ret;
+}
+
+#endif
+
+extern unsigned long mcpm_power_up_setup_phys;
+
+int __init mcpm_sync_init(
+	void (*power_up_setup)(unsigned int affinity_level))
+{
+	unsigned int i, j, mpidr, this_cluster;
+
+	BUILD_BUG_ON(MCPM_SYNC_CLUSTER_SIZE * MAX_NR_CLUSTERS != sizeof mcpm_sync);
+	BUG_ON((unsigned long)&mcpm_sync & (__CACHE_WRITEBACK_GRANULE - 1));
+
+	/*
+	 * Set initial CPU and cluster states.
+	 * Only one cluster is assumed to be active at this point.
+	 */
+	for (i = 0; i < MAX_NR_CLUSTERS; i++) {
+		mcpm_sync.clusters[i].cluster = CLUSTER_DOWN;
+		mcpm_sync.clusters[i].inbound = INBOUND_NOT_COMING_UP;
+		for (j = 0; j < MAX_CPUS_PER_CLUSTER; j++)
+			mcpm_sync.clusters[i].cpus[j].cpu = CPU_DOWN;
+	}
+	mpidr = read_cpuid_mpidr();
+	this_cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+	for_each_online_cpu(i) {
+		mcpm_cpu_use_count[this_cluster][i] = 1;
+		mcpm_sync.clusters[this_cluster].cpus[i].cpu = CPU_UP;
+	}
+	mcpm_sync.clusters[this_cluster].cluster = CLUSTER_UP;
+	sync_cache_w(&mcpm_sync);
+
+	if (power_up_setup) {
+		mcpm_power_up_setup_phys = __pa_symbol(power_up_setup);
+		sync_cache_w(&mcpm_power_up_setup_phys);
+	}
+
+	return 0;
+}
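For orientation, here is a minimal, hedged sketch of how a platform
backend sits underneath the API above: it fills in the
mcpm_platform_ops callbacks that mcpm_cpu_power_up() and
mcpm_cpu_power_down() invoke, then registers them. All myplat_* names
are hypothetical; only the ops structure, mcpm_platform_register() and
mcpm_sync_init() come from the code above.

#include <linux/init.h>
#include <asm/mcpm.h>

/* Hypothetical SoC glue: deassert reset / ungate power for one CPU. */
static int myplat_cpu_powerup(unsigned int cpu, unsigned int cluster)
{
	/* program the SoC power controller for (cpu, cluster) here */
	return 0;
}

static int myplat_cluster_powerup(unsigned int cluster)
{
	/* power on cluster-level resources (L2, coherency interface) */
	return 0;
}

static void myplat_cpu_powerdown_prepare(unsigned int cpu, unsigned int cluster)
{
	/* arm the power controller so the WFI in mcpm_cpu_power_down()
	 * actually cuts power */
}

static void myplat_cluster_powerdown_prepare(unsigned int cluster)
{
}

static void myplat_cpu_cache_disable(void)
{
	/* flush and disable the local L1, as required before CPU_DOWN */
}

static void myplat_cluster_cache_disable(void)
{
	/* flush and disable L1 + L2 when the last man takes the cluster down */
}

static const struct mcpm_platform_ops myplat_ops = {
	.cpu_powerup		   = myplat_cpu_powerup,
	.cluster_powerup	   = myplat_cluster_powerup,
	.cpu_powerdown_prepare	   = myplat_cpu_powerdown_prepare,
	.cluster_powerdown_prepare = myplat_cluster_powerdown_prepare,
	.cpu_cache_disable	   = myplat_cpu_cache_disable,
	.cluster_cache_disable	   = myplat_cluster_cache_disable,
};

static int __init myplat_mcpm_init(void)
{
	/* No assembly power_up_setup helper in this sketch, hence NULL. */
	int ret = mcpm_sync_init(NULL);

	if (!ret)
		ret = mcpm_platform_register(&myplat_ops);
	return ret;
}
early_initcall(myplat_mcpm_init);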