commit     5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree       cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /drivers/soc/qcom/pdr_interface.c
author     2023-02-21 18:24:12 -0800
committer  2023-02-21 18:24:12 -0800
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs.
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port range
on a socket-by-socket basis (a usage sketch follows this list).
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
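[Editor's sketch, not part of the pull request] A minimal userspace example of the IP_LOCAL_PORT_RANGE option mentioned above. The value encoding (upper port bound in the high 16 bits, lower bound in the low 16 bits) and the fallback option number are assumptions to be verified against the uAPI headers of the running kernel:

/* Restrict the ephemeral port range of one socket. */
#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>

#ifndef IP_LOCAL_PORT_RANGE
#define IP_LOCAL_PORT_RANGE 51	/* assumed value from linux/in.h */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	/* assumed encoding: high 16 bits = upper bound, low 16 bits = lower bound */
	unsigned int range = (40100u << 16) | 40000u;

	if (setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
		       &range, sizeof(range)) < 0)
		perror("setsockopt(IP_LOCAL_PORT_RANGE)");
	return 0;
}

With this set, autobind and bind(0) should only pick ports from the per-socket window instead of the global net.ipv4.ip_local_port_range.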
BPF:
- Add an rbtree data structure following the "next-gen data structure"
precedent set by the recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in collect
metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscall as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs (a short
kernel-side sketch follows this list).
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
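[Editor's sketch, not part of the pull request] A hedged kernel-side example of the __bpf_kfunc tag mentioned above. The example function, its flags, and its registration under BPF_PROG_TYPE_TRACING are hypothetical; __bpf_kfunc, the BTF_SET8_* macros and register_btf_kfunc_id_set() are existing kfunc infrastructure:

#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/init.h>
#include <linux/module.h>

/* The tag marks the function as callable from BPF programs and keeps it
 * from being flagged as unused / lacking a prototype. */
__bpf_kfunc u64 bpf_example_square(u64 x)
{
	return x * x;
}

/* Collect the kfunc into a BTF ID set... */
BTF_SET8_START(example_kfunc_ids)
BTF_ID_FLAGS(func, bpf_example_square)
BTF_SET8_END(example_kfunc_ids)

static const struct btf_kfunc_id_set example_kfunc_set = {
	.owner = THIS_MODULE,
	.set   = &example_kfunc_ids,
};

/* ...and expose it to tracing programs (hypothetical registration site). */
static int __init example_kfunc_init(void)
{
	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
					 &example_kfunc_set);
}
late_initcall(example_kfunc_init);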
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races of the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of Physical Layer Collision Avoidance (PLCA)
Reconciliation Sublayer (RS) (802.3cg-2019). Modern version of
shared medium Ethernet.
- Support for MAC Merge layer (IEEE 802.3-2018 clause 99). Allowing
preemption of low priority frames by high priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using nl80211 interface instead.
- Improve the CAN bit timing configuration. Use extack to return
error messages directly to user space, update the SJW handling,
including the definition of a new default value that will benefit
CAN-FD controllers, by increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper PHYs
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'drivers/soc/qcom/pdr_interface.c')
-rw-r--r--   drivers/soc/qcom/pdr_interface.c | 755
1 file changed, 755 insertions, 0 deletions
diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
new file mode 100644
index 000000000..0034af927
--- /dev/null
+++ b/drivers/soc/qcom/pdr_interface.c
@@ -0,0 +1,755 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/workqueue.h>
+
+#include "pdr_internal.h"
+
+struct pdr_service {
+	char service_name[SERVREG_NAME_LENGTH + 1];
+	char service_path[SERVREG_NAME_LENGTH + 1];
+
+	struct sockaddr_qrtr addr;
+
+	unsigned int instance;
+	unsigned int service;
+	u8 service_data_valid;
+	u32 service_data;
+	int state;
+
+	bool need_notifier_register;
+	bool need_notifier_remove;
+	bool need_locator_lookup;
+	bool service_connected;
+
+	struct list_head node;
+};
+
+struct pdr_handle {
+	struct qmi_handle locator_hdl;
+	struct qmi_handle notifier_hdl;
+
+	struct sockaddr_qrtr locator_addr;
+
+	struct list_head lookups;
+	struct list_head indack_list;
+
+	/* control access to pdr lookup/indack lists */
+	struct mutex list_lock;
+
+	/* serialize pd status invocation */
+	struct mutex status_lock;
+
+	/* control access to the locator state */
+	struct mutex lock;
+
+	bool locator_init_complete;
+
+	struct work_struct locator_work;
+	struct work_struct notifier_work;
+	struct work_struct indack_work;
+
+	struct workqueue_struct *notifier_wq;
+	struct workqueue_struct *indack_wq;
+
+	void (*status)(int state, char *service_path, void *priv);
+	void *priv;
+};
+
+struct pdr_list_node {
+	enum servreg_service_state curr_state;
+	u16 transaction_id;
+	struct pdr_service *pds;
+	struct list_head node;
+};
+
+static int pdr_locator_new_server(struct qmi_handle *qmi,
+				  struct qmi_service *svc)
+{
+	struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
+					      locator_hdl);
+	struct pdr_service *pds;
+
+	/* Create a local client port for QMI communication */
+	pdr->locator_addr.sq_family = AF_QIPCRTR;
+	pdr->locator_addr.sq_node = svc->node;
+	pdr->locator_addr.sq_port = svc->port;
+
+	mutex_lock(&pdr->lock);
+	pdr->locator_init_complete = true;
+	mutex_unlock(&pdr->lock);
+
+	/* Service pending lookup requests */
+	mutex_lock(&pdr->list_lock);
+	list_for_each_entry(pds, &pdr->lookups, node) {
+		if (pds->need_locator_lookup)
+			schedule_work(&pdr->locator_work);
+	}
+	mutex_unlock(&pdr->list_lock);
+
+	return 0;
+}
+
+static void pdr_locator_del_server(struct qmi_handle *qmi,
+				   struct qmi_service *svc)
+{
+	struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
+					      locator_hdl);
+
+	mutex_lock(&pdr->lock);
+	pdr->locator_init_complete = false;
+	mutex_unlock(&pdr->lock);
+
+	pdr->locator_addr.sq_node = 0;
+	pdr->locator_addr.sq_port = 0;
+}
+
+static const struct qmi_ops pdr_locator_ops = {
+	.new_server = pdr_locator_new_server,
+	.del_server = pdr_locator_del_server,
+};
+
+static int pdr_register_listener(struct pdr_handle *pdr,
+				 struct pdr_service *pds,
+				 bool enable)
+{
+	struct servreg_register_listener_resp resp;
+	struct servreg_register_listener_req req;
+	struct qmi_txn txn;
+	int ret;
+
+	ret = qmi_txn_init(&pdr->notifier_hdl, &txn,
+			   servreg_register_listener_resp_ei,
+			   &resp);
+	if (ret < 0)
+		return ret;
+
+	req.enable = enable;
+	strscpy(req.service_path, pds->service_path, sizeof(req.service_path));
+
+	ret = qmi_send_request(&pdr->notifier_hdl, &pds->addr,
+			       &txn, SERVREG_REGISTER_LISTENER_REQ,
+			       SERVREG_REGISTER_LISTENER_REQ_LEN,
+			       servreg_register_listener_req_ei,
+			       &req);
+	if (ret < 0) {
+		qmi_txn_cancel(&txn);
+		return ret;
+	}
+
+	ret = qmi_txn_wait(&txn, 5 * HZ);
+	if (ret < 0) {
+		pr_err("PDR: %s register listener txn wait failed: %d\n",
+		       pds->service_path, ret);
+		return ret;
+	}
+
+	if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+		pr_err("PDR: %s register listener failed: 0x%x\n",
+		       pds->service_path, resp.resp.error);
+		return -EREMOTEIO;
+	}
+
+	pds->state = resp.curr_state;
+
+	return 0;
+}
+
+static void pdr_notifier_work(struct work_struct *work)
+{
+	struct pdr_handle *pdr = container_of(work, struct pdr_handle,
+					      notifier_work);
+	struct pdr_service *pds;
+	int ret;
+
+	mutex_lock(&pdr->list_lock);
+	list_for_each_entry(pds, &pdr->lookups, node) {
+		if (pds->service_connected) {
+			if (!pds->need_notifier_register)
+				continue;
+
+			pds->need_notifier_register = false;
+			ret = pdr_register_listener(pdr, pds, true);
+			if (ret < 0)
+				pds->state = SERVREG_SERVICE_STATE_DOWN;
+		} else {
+			if (!pds->need_notifier_remove)
+				continue;
+
+			pds->need_notifier_remove = false;
+			pds->state = SERVREG_SERVICE_STATE_DOWN;
+		}
+
+		mutex_lock(&pdr->status_lock);
+		pdr->status(pds->state, pds->service_path, pdr->priv);
+		mutex_unlock(&pdr->status_lock);
+	}
+	mutex_unlock(&pdr->list_lock);
+}
+
+static int pdr_notifier_new_server(struct qmi_handle *qmi,
+				   struct qmi_service *svc)
+{
+	struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
+					      notifier_hdl);
+	struct pdr_service *pds;
+
+	mutex_lock(&pdr->list_lock);
+	list_for_each_entry(pds, &pdr->lookups, node) {
+		if (pds->service == svc->service &&
+		    pds->instance == svc->instance) {
+			pds->service_connected = true;
+			pds->need_notifier_register = true;
+			pds->addr.sq_family = AF_QIPCRTR;
+			pds->addr.sq_node = svc->node;
+			pds->addr.sq_port = svc->port;
+			queue_work(pdr->notifier_wq, &pdr->notifier_work);
+		}
+	}
+	mutex_unlock(&pdr->list_lock);
+
+	return 0;
+}
+
+static void pdr_notifier_del_server(struct qmi_handle *qmi,
+				    struct qmi_service *svc)
+{
+	struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
+					      notifier_hdl);
+	struct pdr_service *pds;
+
+	mutex_lock(&pdr->list_lock);
+	list_for_each_entry(pds, &pdr->lookups, node) {
+		if (pds->service == svc->service &&
+		    pds->instance == svc->instance) {
+			pds->service_connected = false;
+			pds->need_notifier_remove = true;
+			pds->addr.sq_node = 0;
+			pds->addr.sq_port = 0;
+			queue_work(pdr->notifier_wq, &pdr->notifier_work);
+		}
+	}
+	mutex_unlock(&pdr->list_lock);
+}
+
+static const struct qmi_ops pdr_notifier_ops = {
+	.new_server = pdr_notifier_new_server,
+	.del_server = pdr_notifier_del_server,
+};
+
+static int pdr_send_indack_msg(struct pdr_handle *pdr, struct pdr_service *pds,
+			       u16 tid)
+{
+	struct servreg_set_ack_resp resp;
+	struct servreg_set_ack_req req;
+	struct qmi_txn txn;
+	int ret;
+
+	ret = qmi_txn_init(&pdr->notifier_hdl, &txn, servreg_set_ack_resp_ei,
+			   &resp);
+	if (ret < 0)
+		return ret;
+
+	req.transaction_id = tid;
+	strscpy(req.service_path, pds->service_path, sizeof(req.service_path));
+
+	ret = qmi_send_request(&pdr->notifier_hdl, &pds->addr,
+			       &txn, SERVREG_SET_ACK_REQ,
+			       SERVREG_SET_ACK_REQ_LEN,
+			       servreg_set_ack_req_ei,
+			       &req);
+
+	/* Skip waiting for response */
+	qmi_txn_cancel(&txn);
+	return ret;
+}
+
+static void pdr_indack_work(struct work_struct *work)
+{
+	struct pdr_handle *pdr = container_of(work, struct pdr_handle,
+					      indack_work);
+	struct pdr_list_node *ind, *tmp;
+	struct pdr_service *pds;
+
+	list_for_each_entry_safe(ind, tmp, &pdr->indack_list, node) {
+		pds = ind->pds;
+
+		mutex_lock(&pdr->status_lock);
+		pds->state = ind->curr_state;
+		pdr->status(pds->state, pds->service_path, pdr->priv);
+		mutex_unlock(&pdr->status_lock);
+
+		/* Ack the indication after clients release the PD resources */
+		pdr_send_indack_msg(pdr, pds, ind->transaction_id);
+
+		mutex_lock(&pdr->list_lock);
+		list_del(&ind->node);
+		mutex_unlock(&pdr->list_lock);
+
+		kfree(ind);
+	}
+}
+
+static void pdr_indication_cb(struct qmi_handle *qmi,
+			      struct sockaddr_qrtr *sq,
+			      struct qmi_txn *txn, const void *data)
+{
+	struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
+					      notifier_hdl);
+	const struct servreg_state_updated_ind *ind_msg = data;
+	struct pdr_list_node *ind;
+	struct pdr_service *pds = NULL, *iter;
+
+	if (!ind_msg || !ind_msg->service_path[0] ||
+	    strlen(ind_msg->service_path) > SERVREG_NAME_LENGTH)
+		return;
+
+	mutex_lock(&pdr->list_lock);
+	list_for_each_entry(iter, &pdr->lookups, node) {
+		if (strcmp(iter->service_path, ind_msg->service_path))
+			continue;
+
+		pds = iter;
+		break;
+	}
+	mutex_unlock(&pdr->list_lock);
+
+	if (!pds)
+		return;
+
+	pr_info("PDR: Indication received from %s, state: 0x%x, trans-id: %d\n",
+		ind_msg->service_path, ind_msg->curr_state,
+		ind_msg->transaction_id);
+
+	ind = kzalloc(sizeof(*ind), GFP_KERNEL);
+	if (!ind)
+		return;
+
+	ind->transaction_id = ind_msg->transaction_id;
+	ind->curr_state = ind_msg->curr_state;
+	ind->pds = pds;
+
+	mutex_lock(&pdr->list_lock);
+	list_add_tail(&ind->node, &pdr->indack_list);
+	mutex_unlock(&pdr->list_lock);
+
+	queue_work(pdr->indack_wq, &pdr->indack_work);
+}
+
+static const struct qmi_msg_handler qmi_indication_handler[] = {
+	{
+		.type = QMI_INDICATION,
+		.msg_id = SERVREG_STATE_UPDATED_IND_ID,
+		.ei = servreg_state_updated_ind_ei,
+		.decoded_size = sizeof(struct servreg_state_updated_ind),
+		.fn = pdr_indication_cb,
+	},
+	{}
+};
+
+static int pdr_get_domain_list(struct servreg_get_domain_list_req *req,
+			       struct servreg_get_domain_list_resp *resp,
+			       struct pdr_handle *pdr)
+{
+	struct qmi_txn txn;
+	int ret;
+
+	ret = qmi_txn_init(&pdr->locator_hdl, &txn,
+			   servreg_get_domain_list_resp_ei, resp);
+	if (ret < 0)
+		return ret;
+
+	ret = qmi_send_request(&pdr->locator_hdl,
+			       &pdr->locator_addr,
+			       &txn, SERVREG_GET_DOMAIN_LIST_REQ,
+			       SERVREG_GET_DOMAIN_LIST_REQ_MAX_LEN,
+			       servreg_get_domain_list_req_ei,
+			       req);
+	if (ret < 0) {
+		qmi_txn_cancel(&txn);
+		return ret;
+	}
+
+	ret = qmi_txn_wait(&txn, 5 * HZ);
+	if (ret < 0) {
+		pr_err("PDR: %s get domain list txn wait failed: %d\n",
+		       req->service_name, ret);
+		return ret;
+	}
+
+	if (resp->resp.result != QMI_RESULT_SUCCESS_V01) {
+		pr_err("PDR: %s get domain list failed: 0x%x\n",
+		       req->service_name, resp->resp.error);
+		return -EREMOTEIO;
+	}
+
+	return 0;
+}
+
+static int pdr_locate_service(struct pdr_handle *pdr, struct pdr_service *pds)
+{
+	struct servreg_get_domain_list_resp *resp;
+	struct servreg_get_domain_list_req req;
+	struct servreg_location_entry *entry;
+	int domains_read = 0;
+	int ret, i;
+
+	resp = kzalloc(sizeof(*resp), GFP_KERNEL);
+	if (!resp)
+		return -ENOMEM;
+
+	/* Prepare req message */
+	strscpy(req.service_name, pds->service_name, sizeof(req.service_name));
+	req.domain_offset_valid = true;
+	req.domain_offset = 0;
+
+	do {
+		req.domain_offset = domains_read;
+		ret = pdr_get_domain_list(&req, resp, pdr);
+		if (ret < 0)
+			goto out;
+
+		for (i = domains_read; i < resp->domain_list_len; i++) {
+			entry = &resp->domain_list[i];
+
+			if (strnlen(entry->name, sizeof(entry->name)) == sizeof(entry->name))
+				continue;
+
+			if (!strcmp(entry->name, pds->service_path)) {
+				pds->service_data_valid = entry->service_data_valid;
+				pds->service_data = entry->service_data;
+				pds->instance = entry->instance;
+				goto out;
+			}
+		}
+
+		/* Update ret to indicate that the service is not yet found */
+		ret = -ENXIO;
+
+		/* Always read total_domains from the response msg */
+		if (resp->domain_list_len > resp->total_domains)
+			resp->domain_list_len = resp->total_domains;
+
+		domains_read += resp->domain_list_len;
+	} while (domains_read < resp->total_domains);
+out:
+	kfree(resp);
+	return ret;
+}
+
+static void pdr_notify_lookup_failure(struct pdr_handle *pdr,
+				      struct pdr_service *pds,
+				      int err)
+{
+	pr_err("PDR: service lookup for %s failed: %d\n",
+	       pds->service_name, err);
+
+	if (err == -ENXIO)
+		return;
+
+	list_del(&pds->node);
+	pds->state = SERVREG_LOCATOR_ERR;
+	mutex_lock(&pdr->status_lock);
+	pdr->status(pds->state, pds->service_path, pdr->priv);
+	mutex_unlock(&pdr->status_lock);
+	kfree(pds);
+}
+
+static void pdr_locator_work(struct work_struct *work)
+{
+	struct pdr_handle *pdr = container_of(work, struct pdr_handle,
+					      locator_work);
+	struct pdr_service *pds, *tmp;
+	int ret = 0;
+
+	/* Bail out early if the SERVREG LOCATOR QMI service is not up */
+	mutex_lock(&pdr->lock);
+	if (!pdr->locator_init_complete) {
+		mutex_unlock(&pdr->lock);
+		pr_debug("PDR: SERVICE LOCATOR service not available\n");
+		return;
+	}
+	mutex_unlock(&pdr->lock);
+
+	mutex_lock(&pdr->list_lock);
+	list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
+		if (!pds->need_locator_lookup)
+			continue;
+
+		ret = pdr_locate_service(pdr, pds);
+		if (ret < 0) {
+			pdr_notify_lookup_failure(pdr, pds, ret);
+			continue;
+		}
+
+		ret = qmi_add_lookup(&pdr->notifier_hdl, pds->service, 1,
+				     pds->instance);
+		if (ret < 0) {
+			pdr_notify_lookup_failure(pdr, pds, ret);
+			continue;
+		}
+
+		pds->need_locator_lookup = false;
+	}
+	mutex_unlock(&pdr->list_lock);
+}
+
+/**
+ * pdr_add_lookup() - register a tracking request for a PD
+ * @pdr: PDR client handle
+ * @service_name: service name of the tracking request
+ * @service_path: service path of the tracking request
+ *
+ * Registering a pdr lookup allows for tracking the life cycle of the PD.
+ *
+ * Return: pdr_service object on success, ERR_PTR on failure. -EALREADY is
+ * returned if a lookup is already in progress for the given service path.
+ */
+struct pdr_service *pdr_add_lookup(struct pdr_handle *pdr,
+				   const char *service_name,
+				   const char *service_path)
+{
+	struct pdr_service *pds, *tmp;
+	int ret;
+
+	if (IS_ERR_OR_NULL(pdr))
+		return ERR_PTR(-EINVAL);
+
+	if (!service_name || strlen(service_name) > SERVREG_NAME_LENGTH ||
+	    !service_path || strlen(service_path) > SERVREG_NAME_LENGTH)
+		return ERR_PTR(-EINVAL);
+
+	pds = kzalloc(sizeof(*pds), GFP_KERNEL);
+	if (!pds)
+		return ERR_PTR(-ENOMEM);
+
+	pds->service = SERVREG_NOTIFIER_SERVICE;
+	strscpy(pds->service_name, service_name, sizeof(pds->service_name));
+	strscpy(pds->service_path, service_path, sizeof(pds->service_path));
+	pds->need_locator_lookup = true;
+
+	mutex_lock(&pdr->list_lock);
+	list_for_each_entry(tmp, &pdr->lookups, node) {
+		if (strcmp(tmp->service_path, service_path))
+			continue;
+
+		mutex_unlock(&pdr->list_lock);
+		ret = -EALREADY;
+		goto err;
+	}
+
+	list_add(&pds->node, &pdr->lookups);
+	mutex_unlock(&pdr->list_lock);
+
+	schedule_work(&pdr->locator_work);
+
+	return pds;
+err:
+	kfree(pds);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(pdr_add_lookup);
+
+/**
+ * pdr_restart_pd() - restart PD
+ * @pdr: PDR client handle
+ * @pds: PD service handle
+ *
+ * Restarts the PD tracked by the PDR client handle for a given service path.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int pdr_restart_pd(struct pdr_handle *pdr, struct pdr_service *pds)
+{
+	struct servreg_restart_pd_resp resp;
+	struct servreg_restart_pd_req req = { 0 };
+	struct sockaddr_qrtr addr;
+	struct pdr_service *tmp;
+	struct qmi_txn txn;
+	int ret;
+
+	if (IS_ERR_OR_NULL(pdr) || IS_ERR_OR_NULL(pds))
+		return -EINVAL;
+
+	mutex_lock(&pdr->list_lock);
+	list_for_each_entry(tmp, &pdr->lookups, node) {
+		if (tmp != pds)
+			continue;
+
+		if (!pds->service_connected)
+			break;
+
+		/* Prepare req message */
+		strscpy(req.service_path, pds->service_path, sizeof(req.service_path));
+		addr = pds->addr;
+		break;
+	}
+	mutex_unlock(&pdr->list_lock);
+
+	if (!req.service_path[0])
+		return -EINVAL;
+
+	ret = qmi_txn_init(&pdr->notifier_hdl, &txn,
+			   servreg_restart_pd_resp_ei,
+			   &resp);
+	if (ret < 0)
+		return ret;
+
+	ret = qmi_send_request(&pdr->notifier_hdl, &addr,
+			       &txn, SERVREG_RESTART_PD_REQ,
+			       SERVREG_RESTART_PD_REQ_MAX_LEN,
+			       servreg_restart_pd_req_ei, &req);
+	if (ret < 0) {
+		qmi_txn_cancel(&txn);
+		return ret;
+	}
+
+	ret = qmi_txn_wait(&txn, 5 * HZ);
+	if (ret < 0) {
+		pr_err("PDR: %s PD restart txn wait failed: %d\n",
+		       req.service_path, ret);
+		return ret;
+	}
+
+	/* Check response if PDR is disabled */
+	if (resp.resp.result == QMI_RESULT_FAILURE_V01 &&
+	    resp.resp.error == QMI_ERR_DISABLED_V01) {
+		pr_err("PDR: %s PD restart is disabled: 0x%x\n",
+		       req.service_path, resp.resp.error);
+		return -EOPNOTSUPP;
+	}
+
+	/* Check the response for other error case */
+	if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+		pr_err("PDR: %s request for PD restart failed: 0x%x\n",
+		       req.service_path, resp.resp.error);
+		return -EREMOTEIO;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(pdr_restart_pd);
+
+/**
+ * pdr_handle_alloc() - initialize the PDR client handle
+ * @status: function to be called on PD state change
+ * @priv: handle for client's use
+ *
+ * Initializes the PDR client handle to allow for tracking/restart of PDs.
+ *
+ * Return: pdr_handle object on success, ERR_PTR on failure.
+ */
+struct pdr_handle *pdr_handle_alloc(void (*status)(int state,
+						    char *service_path,
+						    void *priv), void *priv)
+{
+	struct pdr_handle *pdr;
+	int ret;
+
+	if (!status)
+		return ERR_PTR(-EINVAL);
+
+	pdr = kzalloc(sizeof(*pdr), GFP_KERNEL);
+	if (!pdr)
+		return ERR_PTR(-ENOMEM);
+
+	pdr->status = status;
+	pdr->priv = priv;
+
+	mutex_init(&pdr->status_lock);
+	mutex_init(&pdr->list_lock);
+	mutex_init(&pdr->lock);
+
+	INIT_LIST_HEAD(&pdr->lookups);
+	INIT_LIST_HEAD(&pdr->indack_list);
+
+	INIT_WORK(&pdr->locator_work, pdr_locator_work);
+	INIT_WORK(&pdr->notifier_work, pdr_notifier_work);
+	INIT_WORK(&pdr->indack_work, pdr_indack_work);
+
+	pdr->notifier_wq = create_singlethread_workqueue("pdr_notifier_wq");
+	if (!pdr->notifier_wq) {
+		ret = -ENOMEM;
+		goto free_pdr_handle;
+	}
+
+	pdr->indack_wq = alloc_ordered_workqueue("pdr_indack_wq", WQ_HIGHPRI);
+	if (!pdr->indack_wq) {
+		ret = -ENOMEM;
+		goto destroy_notifier;
+	}
+
+	ret = qmi_handle_init(&pdr->locator_hdl,
+			      SERVREG_GET_DOMAIN_LIST_RESP_MAX_LEN,
+			      &pdr_locator_ops, NULL);
+	if (ret < 0)
+		goto destroy_indack;
+
+	ret = qmi_add_lookup(&pdr->locator_hdl, SERVREG_LOCATOR_SERVICE, 1, 1);
+	if (ret < 0)
+		goto release_qmi_handle;
+
+	ret = qmi_handle_init(&pdr->notifier_hdl,
+			      SERVREG_STATE_UPDATED_IND_MAX_LEN,
+			      &pdr_notifier_ops,
+			      qmi_indication_handler);
+	if (ret < 0)
+		goto release_qmi_handle;
+
+	return pdr;
+
+release_qmi_handle:
+	qmi_handle_release(&pdr->locator_hdl);
+destroy_indack:
+	destroy_workqueue(pdr->indack_wq);
+destroy_notifier:
+	destroy_workqueue(pdr->notifier_wq);
+free_pdr_handle:
+	kfree(pdr);
+
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(pdr_handle_alloc);
+
+/**
+ * pdr_handle_release() - release the PDR client handle
+ * @pdr: PDR client handle
+ *
+ * Cleans up pending tracking requests and releases the underlying qmi handles.
+ */
+void pdr_handle_release(struct pdr_handle *pdr)
+{
+	struct pdr_service *pds, *tmp;
+
+	if (IS_ERR_OR_NULL(pdr))
+		return;
+
+	mutex_lock(&pdr->list_lock);
+	list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
+		list_del(&pds->node);
+		kfree(pds);
+	}
+	mutex_unlock(&pdr->list_lock);
+
+	cancel_work_sync(&pdr->locator_work);
+	cancel_work_sync(&pdr->notifier_work);
+	cancel_work_sync(&pdr->indack_work);
+
+	destroy_workqueue(pdr->notifier_wq);
+	destroy_workqueue(pdr->indack_wq);
+
+	qmi_handle_release(&pdr->locator_hdl);
+	qmi_handle_release(&pdr->notifier_hdl);
+
+	kfree(pdr);
+}
+EXPORT_SYMBOL(pdr_handle_release);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Qualcomm Protection Domain Restart helpers");
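[Editor's sketch, not part of the commit] For context, this is how a client driver might consume the API added by the file above. Only pdr_handle_alloc(), pdr_add_lookup() and pdr_handle_release() are defined in the diff itself; the <linux/soc/qcom/pdr.h> header path, the SERVREG_SERVICE_STATE_UP constant and the "avs/audio" / "msm/adsp/audio_pd" names are assumptions for illustration:

#include <linux/err.h>
#include <linux/module.h>
#include <linux/soc/qcom/pdr.h>	/* assumed public header exporting this API */

static struct pdr_handle *example_pdr;

/* Called from the PDR workqueues whenever the tracked PD changes state. */
static void example_pd_status(int state, char *service_path, void *priv)
{
	pr_info("example: %s is %s\n", service_path,
		state == SERVREG_SERVICE_STATE_UP ? "up" : "down");
}

static int __init example_init(void)
{
	struct pdr_service *pds;

	example_pdr = pdr_handle_alloc(example_pd_status, NULL);
	if (IS_ERR(example_pdr))
		return PTR_ERR(example_pdr);

	/* Track a (hypothetical) audio protection domain on the ADSP. */
	pds = pdr_add_lookup(example_pdr, "avs/audio", "msm/adsp/audio_pd");
	if (IS_ERR(pds) && PTR_ERR(pds) != -EALREADY) {
		pdr_handle_release(example_pdr);
		return PTR_ERR(pds);
	}
	return 0;
}

static void __exit example_exit(void)
{
	pdr_handle_release(example_pdr);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

The lookup is resolved asynchronously via the locator workqueue, so the status callback, not the return value of pdr_add_lookup(), reports whether the PD eventually comes up.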