author     2023-02-21 18:24:12 -0800
committer  2023-02-21 18:24:12 -0800
commit     5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree       cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /fs/pstore/ram_core.c
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs.
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port range
on a socket-by-socket basis (see the sketch after this list).
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- Wi-Fi 7 EHT channel puncturing support (client & AP).
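
The IP_LOCAL_PORT_RANGE item above is a per-socket narrowing of the
net.ipv4.ip_local_port_range sysctl. A minimal userspace sketch follows;
it assumes the 6.3-era uapi value of IP_LOCAL_PORT_RANGE (51 in linux/in.h)
and the u32 encoding from the patch series, upper bound in the high 16 bits
and lower bound in the low 16 bits:

    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #ifndef IP_LOCAL_PORT_RANGE
    #define IP_LOCAL_PORT_RANGE 51 /* uapi value in v6.3 linux/in.h */
    #endif

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            /* Restrict this socket's local ports to 40000..40100:
             * upper bound in the high 16 bits, lower in the low 16. */
            uint32_t range = (40100u << 16) | 40000u;

            if (fd < 0 || setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
                                     &range, sizeof(range)) < 0) {
                    perror("IP_LOCAL_PORT_RANGE");
                    return 1;
            }
            return 0;
    }
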
BPF:
- Add an rbtree data structure following the "next-gen data structure"
precedent set by the recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in
collect-metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscalls as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs (see the
sketch after this list).
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
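
As a concrete illustration of the __bpf_kfunc item above, a hedged sketch
of a definition site follows. Only the __bpf_kfunc tag itself is the real
6.3 interface; the function, its name, and its body are hypothetical, and
the BTF ID set registration a real kfunc also needs is omitted:

    /* Hypothetical kernel-side kfunc; name and body are illustrative.
     * __bpf_kfunc marks the function used and noinline so the compiler
     * neither inlines nor drops it, keeping it callable from BPF. */
    __bpf_kfunc u32 bpf_example_sum(u32 a, u32 b)
    {
            return a + b;
    }
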
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races of the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of Physical Layer Collision Avoidance (PLCA)
Reconciliation Sublayer (RS) (802.3cg-2019). Modern version of
shared medium Ethernet.
- Support for MAC Merge layer (IEEE 802.3-2018 clause 99), allowing
preemption of low-priority frames by high-priority frames.
- Add support for controlling MACsec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using the nl80211 interface instead.
- Improve the CAN bit timing configuration: use extack to return error
messages directly to user space, and update the SJW handling, including
a new default value that increases oscillator tolerance for CAN-FD
controllers.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper PHYs
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
selftests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'fs/pstore/ram_core.c')
-rw-r--r-- | fs/pstore/ram_core.c | 621 |
1 file changed, 621 insertions, 0 deletions
diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
new file mode 100644
index 000000000..966191d3a
--- /dev/null
+++ b/fs/pstore/ram_core.c
@@ -0,0 +1,621 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2012 Google, Inc.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/memblock.h>
+#include <linux/rslib.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/vmalloc.h>
+#include <asm/page.h>
+
+#include "ram_internal.h"
+
+/**
+ * struct persistent_ram_buffer - persistent circular RAM buffer
+ *
+ * @sig:
+ *    signature to indicate header (PERSISTENT_RAM_SIG xor PRZ-type value)
+ * @start:
+ *    offset into @data where the beginning of the stored bytes begin
+ * @size:
+ *    number of valid bytes stored in @data
+ */
+struct persistent_ram_buffer {
+        uint32_t sig;
+        atomic_t start;
+        atomic_t size;
+        uint8_t data[];
+};
+
+#define PERSISTENT_RAM_SIG (0x43474244) /* DBGC */
+
+static inline size_t buffer_size(struct persistent_ram_zone *prz)
+{
+        return atomic_read(&prz->buffer->size);
+}
+
+static inline size_t buffer_start(struct persistent_ram_zone *prz)
+{
+        return atomic_read(&prz->buffer->start);
+}
+
+/* increase and wrap the start pointer, returning the old value */
+static size_t buffer_start_add(struct persistent_ram_zone *prz, size_t a)
+{
+        int old;
+        int new;
+        unsigned long flags = 0;
+
+        if (!(prz->flags & PRZ_FLAG_NO_LOCK))
+                raw_spin_lock_irqsave(&prz->buffer_lock, flags);
+
+        old = atomic_read(&prz->buffer->start);
+        new = old + a;
+        while (unlikely(new >= prz->buffer_size))
+                new -= prz->buffer_size;
+        atomic_set(&prz->buffer->start, new);
+
+        if (!(prz->flags & PRZ_FLAG_NO_LOCK))
+                raw_spin_unlock_irqrestore(&prz->buffer_lock, flags);
+
+        return old;
+}
+
+/* increase the size counter until it hits the max size */
+static void buffer_size_add(struct persistent_ram_zone *prz, size_t a)
+{
+        size_t old;
+        size_t new;
+        unsigned long flags = 0;
+
+        if (!(prz->flags & PRZ_FLAG_NO_LOCK))
+                raw_spin_lock_irqsave(&prz->buffer_lock, flags);
+
+        old = atomic_read(&prz->buffer->size);
+        if (old == prz->buffer_size)
+                goto exit;
+
+        new = old + a;
+        if (new > prz->buffer_size)
+                new = prz->buffer_size;
+        atomic_set(&prz->buffer->size, new);
+
+exit:
+        if (!(prz->flags & PRZ_FLAG_NO_LOCK))
+                raw_spin_unlock_irqrestore(&prz->buffer_lock, flags);
+}
+
+static void notrace persistent_ram_encode_rs8(struct persistent_ram_zone *prz,
+        uint8_t *data, size_t len, uint8_t *ecc)
+{
+        int i;
+
+        /* Initialize the parity buffer */
+        memset(prz->ecc_info.par, 0,
+               prz->ecc_info.ecc_size * sizeof(prz->ecc_info.par[0]));
+        encode_rs8(prz->rs_decoder, data, len, prz->ecc_info.par, 0);
+        for (i = 0; i < prz->ecc_info.ecc_size; i++)
+                ecc[i] = prz->ecc_info.par[i];
+}
+
+static int persistent_ram_decode_rs8(struct persistent_ram_zone *prz,
+        void *data, size_t len, uint8_t *ecc)
+{
+        int i;
+
+        for (i = 0; i < prz->ecc_info.ecc_size; i++)
+                prz->ecc_info.par[i] = ecc[i];
+        return decode_rs8(prz->rs_decoder, data, prz->ecc_info.par, len,
+                          NULL, 0, NULL, 0, NULL);
+}
+
+static void notrace persistent_ram_update_ecc(struct persistent_ram_zone *prz,
+        unsigned int start, unsigned int count)
+{
+        struct persistent_ram_buffer *buffer = prz->buffer;
+        uint8_t *buffer_end = buffer->data + prz->buffer_size;
+        uint8_t *block;
+        uint8_t *par;
+        int ecc_block_size = prz->ecc_info.block_size;
+        int ecc_size = prz->ecc_info.ecc_size;
+        int size = ecc_block_size;
+
+        if (!ecc_size)
+                return;
+
+        block = buffer->data + (start & ~(ecc_block_size - 1));
+        par = prz->par_buffer + (start / ecc_block_size) * ecc_size;
+
+        do {
+                if (block + ecc_block_size > buffer_end)
+                        size = buffer_end - block;
+                persistent_ram_encode_rs8(prz, block, size, par);
+                block += ecc_block_size;
+                par += ecc_size;
+        } while (block < buffer->data + start + count);
+}
+
+static void persistent_ram_update_header_ecc(struct persistent_ram_zone *prz)
+{
+        struct persistent_ram_buffer *buffer = prz->buffer;
+
+        if (!prz->ecc_info.ecc_size)
+                return;
+
+        persistent_ram_encode_rs8(prz, (uint8_t *)buffer, sizeof(*buffer),
+                                  prz->par_header);
+}
+
+static void persistent_ram_ecc_old(struct persistent_ram_zone *prz)
+{
+        struct persistent_ram_buffer *buffer = prz->buffer;
+        uint8_t *block;
+        uint8_t *par;
+
+        if (!prz->ecc_info.ecc_size)
+                return;
+
+        block = buffer->data;
+        par = prz->par_buffer;
+        while (block < buffer->data + buffer_size(prz)) {
+                int numerr;
+                int size = prz->ecc_info.block_size;
+                if (block + size > buffer->data + prz->buffer_size)
+                        size = buffer->data + prz->buffer_size - block;
+                numerr = persistent_ram_decode_rs8(prz, block, size, par);
+                if (numerr > 0) {
+                        pr_devel("error in block %p, %d\n", block, numerr);
+                        prz->corrected_bytes += numerr;
+                } else if (numerr < 0) {
+                        pr_devel("uncorrectable error in block %p\n", block);
+                        prz->bad_blocks++;
+                }
+                block += prz->ecc_info.block_size;
+                par += prz->ecc_info.ecc_size;
+        }
+}
+
+static int persistent_ram_init_ecc(struct persistent_ram_zone *prz,
+                                   struct persistent_ram_ecc_info *ecc_info)
+{
+        int numerr;
+        struct persistent_ram_buffer *buffer = prz->buffer;
+        int ecc_blocks;
+        size_t ecc_total;
+
+        if (!ecc_info || !ecc_info->ecc_size)
+                return 0;
+
+        prz->ecc_info.block_size = ecc_info->block_size ?: 128;
+        prz->ecc_info.ecc_size = ecc_info->ecc_size ?: 16;
+        prz->ecc_info.symsize = ecc_info->symsize ?: 8;
+        prz->ecc_info.poly = ecc_info->poly ?: 0x11d;
+
+        ecc_blocks = DIV_ROUND_UP(prz->buffer_size - prz->ecc_info.ecc_size,
+                                  prz->ecc_info.block_size +
+                                  prz->ecc_info.ecc_size);
+        ecc_total = (ecc_blocks + 1) * prz->ecc_info.ecc_size;
+        if (ecc_total >= prz->buffer_size) {
+                pr_err("%s: invalid ecc_size %u (total %zu, buffer size %zu)\n",
+                       __func__, prz->ecc_info.ecc_size,
+                       ecc_total, prz->buffer_size);
+                return -EINVAL;
+        }
+
+        prz->buffer_size -= ecc_total;
+        prz->par_buffer = buffer->data + prz->buffer_size;
+        prz->par_header = prz->par_buffer +
+                          ecc_blocks * prz->ecc_info.ecc_size;
+
+        /*
+         * first consecutive root is 0
+         * primitive element to generate roots = 1
+         */
+        prz->rs_decoder = init_rs(prz->ecc_info.symsize, prz->ecc_info.poly,
+                                  0, 1, prz->ecc_info.ecc_size);
+        if (prz->rs_decoder == NULL) {
+                pr_info("init_rs failed\n");
+                return -EINVAL;
+        }
+
+        /* allocate workspace instead of using stack VLA */
+        prz->ecc_info.par = kmalloc_array(prz->ecc_info.ecc_size,
+                                          sizeof(*prz->ecc_info.par),
+                                          GFP_KERNEL);
+        if (!prz->ecc_info.par) {
+                pr_err("cannot allocate ECC parity workspace\n");
+                return -ENOMEM;
+        }
+
+        prz->corrected_bytes = 0;
+        prz->bad_blocks = 0;
+
+        numerr = persistent_ram_decode_rs8(prz, buffer, sizeof(*buffer),
+                                           prz->par_header);
+        if (numerr > 0) {
+                pr_info("error in header, %d\n", numerr);
+                prz->corrected_bytes += numerr;
+        } else if (numerr < 0) {
+                pr_info_ratelimited("uncorrectable error in header\n");
+                prz->bad_blocks++;
+        }
+
+        return 0;
+}
+
+ssize_t persistent_ram_ecc_string(struct persistent_ram_zone *prz,
+        char *str, size_t len)
+{
+        ssize_t ret;
+
+        if (!prz->ecc_info.ecc_size)
+                return 0;
+
+        if (prz->corrected_bytes || prz->bad_blocks)
+                ret = snprintf(str, len, ""
+                        "\nECC: %d Corrected bytes, %d unrecoverable blocks\n",
+                        prz->corrected_bytes, prz->bad_blocks);
+        else
+                ret = snprintf(str, len, "\nECC: No errors detected\n");
+
+        return ret;
+}
+
+static void notrace persistent_ram_update(struct persistent_ram_zone *prz,
+        const void *s, unsigned int start, unsigned int count)
+{
+        struct persistent_ram_buffer *buffer = prz->buffer;
+        memcpy_toio(buffer->data + start, s, count);
+        persistent_ram_update_ecc(prz, start, count);
+}
+
+static int notrace persistent_ram_update_user(struct persistent_ram_zone *prz,
+        const void __user *s, unsigned int start, unsigned int count)
+{
+        struct persistent_ram_buffer *buffer = prz->buffer;
+        int ret = unlikely(copy_from_user(buffer->data + start, s, count)) ?
+                -EFAULT : 0;
+        persistent_ram_update_ecc(prz, start, count);
+        return ret;
+}
+
+void persistent_ram_save_old(struct persistent_ram_zone *prz)
+{
+        struct persistent_ram_buffer *buffer = prz->buffer;
+        size_t size = buffer_size(prz);
+        size_t start = buffer_start(prz);
+
+        if (!size)
+                return;
+
+        if (!prz->old_log) {
+                persistent_ram_ecc_old(prz);
+                prz->old_log = kmalloc(size, GFP_KERNEL);
+        }
+        if (!prz->old_log) {
+                pr_err("failed to allocate buffer\n");
+                return;
+        }
+
+        prz->old_log_size = size;
+        memcpy_fromio(prz->old_log, &buffer->data[start], size - start);
+        memcpy_fromio(prz->old_log + size - start, &buffer->data[0], start);
+}
+
+int notrace persistent_ram_write(struct persistent_ram_zone *prz,
+        const void *s, unsigned int count)
+{
+        int rem;
+        int c = count;
+        size_t start;
+
+        if (unlikely(c > prz->buffer_size)) {
+                s += c - prz->buffer_size;
+                c = prz->buffer_size;
+        }
+
+        buffer_size_add(prz, c);
+
+        start = buffer_start_add(prz, c);
+
+        rem = prz->buffer_size - start;
+        if (unlikely(rem < c)) {
+                persistent_ram_update(prz, s, start, rem);
+                s += rem;
+                c -= rem;
+                start = 0;
+        }
+        persistent_ram_update(prz, s, start, c);
+
+        persistent_ram_update_header_ecc(prz);
+
+        return count;
+}
+
+int notrace persistent_ram_write_user(struct persistent_ram_zone *prz,
+        const void __user *s, unsigned int count)
+{
+        int rem, ret = 0, c = count;
+        size_t start;
+
+        if (unlikely(c > prz->buffer_size)) {
+                s += c - prz->buffer_size;
+                c = prz->buffer_size;
+        }
+
+        buffer_size_add(prz, c);
+
+        start = buffer_start_add(prz, c);
+
+        rem = prz->buffer_size - start;
+        if (unlikely(rem < c)) {
+                ret = persistent_ram_update_user(prz, s, start, rem);
+                s += rem;
+                c -= rem;
+                start = 0;
+        }
+        if (likely(!ret))
+                ret = persistent_ram_update_user(prz, s, start, c);
+
+        persistent_ram_update_header_ecc(prz);
+
+        return unlikely(ret) ? ret : count;
+}
+
+size_t persistent_ram_old_size(struct persistent_ram_zone *prz)
+{
+        return prz->old_log_size;
+}
+
+void *persistent_ram_old(struct persistent_ram_zone *prz)
+{
+        return prz->old_log;
+}
+
+void persistent_ram_free_old(struct persistent_ram_zone *prz)
+{
+        kfree(prz->old_log);
+        prz->old_log = NULL;
+        prz->old_log_size = 0;
+}
+
+void persistent_ram_zap(struct persistent_ram_zone *prz)
+{
+        atomic_set(&prz->buffer->start, 0);
+        atomic_set(&prz->buffer->size, 0);
+        persistent_ram_update_header_ecc(prz);
+}
+
+#define MEM_TYPE_WCOMBINE 0
+#define MEM_TYPE_NONCACHED 1
+#define MEM_TYPE_NORMAL 2
+
+static void *persistent_ram_vmap(phys_addr_t start, size_t size,
+                unsigned int memtype)
+{
+        struct page **pages;
+        phys_addr_t page_start;
+        unsigned int page_count;
+        pgprot_t prot;
+        unsigned int i;
+        void *vaddr;
+
+        page_start = start - offset_in_page(start);
+        page_count = DIV_ROUND_UP(size + offset_in_page(start), PAGE_SIZE);
+
+        switch (memtype) {
+        case MEM_TYPE_NORMAL:
+                prot = PAGE_KERNEL;
+                break;
+        case MEM_TYPE_NONCACHED:
+                prot = pgprot_noncached(PAGE_KERNEL);
+                break;
+        case MEM_TYPE_WCOMBINE:
+                prot = pgprot_writecombine(PAGE_KERNEL);
+                break;
+        default:
+                pr_err("invalid mem_type=%d\n", memtype);
+                return NULL;
+        }
+
+        pages = kmalloc_array(page_count, sizeof(struct page *), GFP_KERNEL);
+        if (!pages) {
+                pr_err("%s: Failed to allocate array for %u pages\n",
+                       __func__, page_count);
+                return NULL;
+        }
+
+        for (i = 0; i < page_count; i++) {
+                phys_addr_t addr = page_start + i * PAGE_SIZE;
+                pages[i] = pfn_to_page(addr >> PAGE_SHIFT);
+        }
+        /*
+         * VM_IOREMAP used here to bypass this region during vread()
+         * and kmap_atomic() (i.e. kcore) to avoid __va() failures.
+         */
+        vaddr = vmap(pages, page_count, VM_MAP | VM_IOREMAP, prot);
+        kfree(pages);
+
+        /*
+         * Since vmap() uses page granularity, we must add the offset
+         * into the page here, to get the byte granularity address
+         * into the mapping to represent the actual "start" location.
+         */
+        return vaddr + offset_in_page(start);
+}
+
+static void *persistent_ram_iomap(phys_addr_t start, size_t size,
+                unsigned int memtype, char *label)
+{
+        void *va;
+
+        if (!request_mem_region(start, size, label ?: "ramoops")) {
+                pr_err("request mem region (%s 0x%llx@0x%llx) failed\n",
+                        label ?: "ramoops",
+                        (unsigned long long)size, (unsigned long long)start);
+                return NULL;
+        }
+
+        if (memtype)
+                va = ioremap(start, size);
+        else
+                va = ioremap_wc(start, size);
+
+        /*
+         * Since request_mem_region() and ioremap() are byte-granularity
+         * there is no need handle anything special like we do when the
+         * vmap() case in persistent_ram_vmap() above.
+         */
+        return va;
+}
+
+static int persistent_ram_buffer_map(phys_addr_t start, phys_addr_t size,
+                struct persistent_ram_zone *prz, int memtype)
+{
+        prz->paddr = start;
+        prz->size = size;
+
+        if (pfn_valid(start >> PAGE_SHIFT))
+                prz->vaddr = persistent_ram_vmap(start, size, memtype);
+        else
+                prz->vaddr = persistent_ram_iomap(start, size, memtype,
+                                                  prz->label);
+
+        if (!prz->vaddr) {
+                pr_err("%s: Failed to map 0x%llx pages at 0x%llx\n", __func__,
+                        (unsigned long long)size, (unsigned long long)start);
+                return -ENOMEM;
+        }
+
+        prz->buffer = prz->vaddr;
+        prz->buffer_size = size - sizeof(struct persistent_ram_buffer);
+
+        return 0;
+}
+
+static int persistent_ram_post_init(struct persistent_ram_zone *prz, u32 sig,
+                                    struct persistent_ram_ecc_info *ecc_info)
+{
+        int ret;
+        bool zap = !!(prz->flags & PRZ_FLAG_ZAP_OLD);
+
+        ret = persistent_ram_init_ecc(prz, ecc_info);
+        if (ret) {
+                pr_warn("ECC failed %s\n", prz->label);
+                return ret;
+        }
+
+        sig ^= PERSISTENT_RAM_SIG;
+
+        if (prz->buffer->sig == sig) {
+                if (buffer_size(prz) == 0) {
+                        pr_debug("found existing empty buffer\n");
+                        return 0;
+                }
+
+                if (buffer_size(prz) > prz->buffer_size ||
+                    buffer_start(prz) > buffer_size(prz)) {
+                        pr_info("found existing invalid buffer, size %zu, start %zu\n",
+                                buffer_size(prz), buffer_start(prz));
+                        zap = true;
+                } else {
+                        pr_debug("found existing buffer, size %zu, start %zu\n",
+                                 buffer_size(prz), buffer_start(prz));
+                        persistent_ram_save_old(prz);
+                }
+        } else {
+                pr_debug("no valid data in buffer (sig = 0x%08x)\n",
+                         prz->buffer->sig);
+                prz->buffer->sig = sig;
+                zap = true;
+        }
+
+        /* Reset missing, invalid, or single-use memory area. */
+        if (zap)
+                persistent_ram_zap(prz);
+
+        return 0;
+}
+
+void persistent_ram_free(struct persistent_ram_zone **_prz)
+{
+        struct persistent_ram_zone *prz;
+
+        if (!_prz)
+                return;
+
+        prz = *_prz;
+        if (!prz)
+                return;
+
+        if (prz->vaddr) {
+                if (pfn_valid(prz->paddr >> PAGE_SHIFT)) {
+                        /* We must vunmap() at page-granularity. */
+                        vunmap(prz->vaddr - offset_in_page(prz->paddr));
+                } else {
+                        iounmap(prz->vaddr);
+                        release_mem_region(prz->paddr, prz->size);
+                }
+                prz->vaddr = NULL;
+        }
+        if (prz->rs_decoder) {
+                free_rs(prz->rs_decoder);
+                prz->rs_decoder = NULL;
+        }
+        kfree(prz->ecc_info.par);
+        prz->ecc_info.par = NULL;
+
+        persistent_ram_free_old(prz);
+        kfree(prz->label);
+        kfree(prz);
+        *_prz = NULL;
+}
+
+struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size,
+                        u32 sig, struct persistent_ram_ecc_info *ecc_info,
+                        unsigned int memtype, u32 flags, char *label)
+{
+        struct persistent_ram_zone *prz;
+        int ret = -ENOMEM;
+
+        prz = kzalloc(sizeof(struct persistent_ram_zone), GFP_KERNEL);
+        if (!prz) {
+                pr_err("failed to allocate persistent ram zone\n");
+                goto err;
+        }
+
+        /* Initialize general buffer state. */
+        raw_spin_lock_init(&prz->buffer_lock);
+        prz->flags = flags;
+        prz->label = kstrdup(label, GFP_KERNEL);
+
+        ret = persistent_ram_buffer_map(start, size, prz, memtype);
+        if (ret)
+                goto err;
+
+        ret = persistent_ram_post_init(prz, sig, ecc_info);
+        if (ret)
+                goto err;
+
+        pr_debug("attached %s 0x%zx@0x%llx: %zu header, %zu data, %zu ecc (%d/%d)\n",
+                 prz->label, prz->size, (unsigned long long)prz->paddr,
+                 sizeof(*prz->buffer), prz->buffer_size,
+                 prz->size - sizeof(*prz->buffer) - prz->buffer_size,
+                 prz->ecc_info.ecc_size, prz->ecc_info.block_size);
+
+        return prz;
+err:
+        persistent_ram_free(&prz);
+        return ERR_PTR(ret);
+}
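
For orientation, a hedged sketch of how a caller (ramoops is the in-tree
user) might drive the API added above; the physical address, size,
signature, and label below are illustrative values, not real platform
data, and the function itself is hypothetical:

    /* Hypothetical kernel-side caller of the persistent_ram API above;
     * all constants are made up for illustration. */
    static int example_attach(void)
    {
            struct persistent_ram_ecc_info ecc = { .ecc_size = 16 };
            struct persistent_ram_zone *prz;

            prz = persistent_ram_new(0x9ff00000, 0x10000,
                                     0x44474248 /* example sig */, &ecc,
                                     0 /* MEM_TYPE_WCOMBINE */, 0,
                                     "example");
            if (IS_ERR(prz))
                    return PTR_ERR(prz);

            /* Bytes found from a previous boot were already copied out by
             * persistent_ram_save_old() via persistent_ram_post_init(). */
            if (persistent_ram_old_size(prz))
                    pr_info("recovered %zu bytes\n",
                            persistent_ram_old_size(prz));

            /* Appends wrap circularly once the zone fills. */
            persistent_ram_write(prz, "hello\n", 6);

            persistent_ram_free(&prz);
            return 0;
    }
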