author		2023-02-21 18:24:12 -0800
committer	2023-02-21 18:24:12 -0800
commit		5b7c4cabbb65f5c469464da6c5f614cbd7f730f2
tree		cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /drivers/s390/block/dasd_diag.c
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs
(see the sketch after this block).
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
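
The RPS default-mask item above is a plain sysctl knob. The following
user-space sketch shows one way to exercise it; it assumes the new sysctl
is exposed as /proc/sys/net/core/rps_default_mask and that the caller has
CAP_NET_ADMIN in the target network namespace.

    /* Hedged sketch: write a hex cpumask to the (assumed) sysctl path so
     * that netdevs registered afterwards inherit it as their RPS mask. */
    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/sys/net/core/rps_default_mask", "w");

            if (!f) {
                    perror("rps_default_mask");
                    return 1;
            }
            fprintf(f, "f\n");      /* steer new devices to CPUs 0-3 */
            return fclose(f) ? 1 : 0;
    }
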
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port
range on a socket-by-socket basis (see the sketch after this block).
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
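
For the IP_LOCAL_PORT_RANGE item above, a minimal sketch of the uAPI as
this editor reads it: the option takes a u32 whose low 16 bits hold the
lower port bound and whose high 16 bits hold the upper bound. Verify the
option constant and value layout against linux/in.h for your kernel.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #ifndef IP_LOCAL_PORT_RANGE
    #define IP_LOCAL_PORT_RANGE 51          /* linux/in.h, kernel >= 6.3 */
    #endif

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            /* upper bound in the high 16 bits, lower bound in the low */
            uint32_t range = (40100u << 16) | 40000u;

            if (fd < 0 || setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
                                     &range, sizeof(range)) < 0) {
                    perror("IP_LOCAL_PORT_RANGE");
                    return 1;
            }
            /* an implicit bind during connect() now picks a source port
             * in [40000, 40100] instead of the netns-wide range */
            return 0;
    }
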
BPF:
- Add a rbtree data structure following the "next-gen data structure"
precedent set by recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in collect
metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscall as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs (see the
sketch after this block).
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
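
As a rough kernel-side illustration of the __bpf_kfunc item above, the
sketch below marks a made-up helper as a kfunc and registers it for
tracing programs. bpf_example_add and example_kfunc_set are hypothetical
names; the BTF_SET8 and register_btf_kfunc_id_set usage follows the
in-tree kfuncs documentation of this era.

    #include <linux/bpf.h>
    #include <linux/btf.h>
    #include <linux/btf_ids.h>
    #include <linux/module.h>

    /* __bpf_kfunc keeps the symbol out-of-line and visible in BTF. */
    __bpf_kfunc u32 bpf_example_add(u32 a, u32 b)   /* hypothetical kfunc */
    {
            return a + b;
    }

    BTF_SET8_START(example_kfunc_ids)
    BTF_ID_FLAGS(func, bpf_example_add)
    BTF_SET8_END(example_kfunc_ids)

    static const struct btf_kfunc_id_set example_kfunc_set = {
            .owner = THIS_MODULE,
            .set   = &example_kfunc_ids,
    };

    static int __init example_init(void)
    {
            /* expose the kfunc to BPF tracing programs */
            return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
                                             &example_kfunc_set);
    }
    module_init(example_init);
    MODULE_LICENSE("GPL");
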
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races of the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of Physical Layer Collision Avoidance (PLCA)
Reconciliation Sublayer (RS) (802.3cg-2019). Modern version of
shared medium Ethernet.
- Support for MAC Merge layer (IEEE 802.3-2018 clause 99). Allowing
preemption of low priority frames by high priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using the nl80211 interface instead.
- Improve the CAN bit timing configuration. Use extack to return
error messages directly to user space, update the SJW handling,
including the definition of a new default value that will benefit
CAN-FD controllers, by increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper phys
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'drivers/s390/block/dasd_diag.c')
-rw-r--r--	drivers/s390/block/dasd_diag.c	695
1 file changed, 695 insertions, 0 deletions
diff --git a/drivers/s390/block/dasd_diag.c b/drivers/s390/block/dasd_diag.c
new file mode 100644
index 000000000..f956a4ac9
--- /dev/null
+++ b/drivers/s390/block/dasd_diag.c
@@ -0,0 +1,695 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
+ * Based on.......: linux/drivers/s390/block/mdisk.c
+ * ...............: by Hartmunt Penner <hpenner@de.ibm.com>
+ * Bugreports.to..: <Linux390@de.ibm.com>
+ * Copyright IBM Corp. 1999, 2000
+ *
+ */
+
+#define KMSG_COMPONENT "dasd"
+
+#include <linux/kernel_stat.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/hdreg.h>
+#include <linux/bio.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/jiffies.h>
+#include <asm/asm-extable.h>
+#include <asm/dasd.h>
+#include <asm/debug.h>
+#include <asm/diag.h>
+#include <asm/ebcdic.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/vtoc.h>
+
+#include "dasd_int.h"
+#include "dasd_diag.h"
+
+#define PRINTK_HEADER "dasd(diag):"
+
+MODULE_LICENSE("GPL");
+
+/* The maximum number of blocks per request (max_blocks) is dependent on the
+ * amount of storage that is available in the static I/O buffer for each
+ * device. Currently each device gets 2 pages. We want to fit two requests
+ * into the available memory so that we can immediately start the next if one
+ * finishes. */
+#define DIAG_MAX_BLOCKS	(((2 * PAGE_SIZE - sizeof(struct dasd_ccw_req) - \
+			   sizeof(struct dasd_diag_req)) / \
+			   sizeof(struct dasd_diag_bio)) / 2)
+#define DIAG_MAX_RETRIES	32
+#define DIAG_TIMEOUT		50
+
+static struct dasd_discipline dasd_diag_discipline;
+
+struct dasd_diag_private {
+	struct dasd_diag_characteristics rdc_data;
+	struct dasd_diag_rw_io iob;
+	struct dasd_diag_init_io iib;
+	blocknum_t pt_block;
+	struct ccw_dev_id dev_id;
+};
+
+struct dasd_diag_req {
+	unsigned int block_count;
+	struct dasd_diag_bio bio[];
+};
+
+static const u8 DASD_DIAG_CMS1[] = { 0xc3, 0xd4, 0xe2, 0xf1 };/* EBCDIC CMS1 */
+
+/* Perform DIAG250 call with block I/O parameter list iob (input and output)
+ * and function code cmd.
+ * In case of an exception return 3. Otherwise return result of bitwise OR of
+ * resulting condition code and DIAG return code. */
+static inline int __dia250(void *iob, int cmd)
+{
+	union register_pair rx = { .even = (unsigned long)iob, };
+	typedef union {
+		struct dasd_diag_init_io init_io;
+		struct dasd_diag_rw_io rw_io;
+	} addr_type;
+	int cc;
+
+	cc = 3;
+	asm volatile(
+		"	diag	%[rx],%[cmd],0x250\n"
+		"0:	ipm	%[cc]\n"
+		"	srl	%[cc],28\n"
+		"1:\n"
+		EX_TABLE(0b,1b)
+		: [cc] "+&d" (cc), [rx] "+&d" (rx.pair), "+m" (*(addr_type *)iob)
+		: [cmd] "d" (cmd)
+		: "cc");
+	return cc | rx.odd;
+}
+
+static inline int dia250(void *iob, int cmd)
+{
+	diag_stat_inc(DIAG_STAT_X250);
+	return __dia250(iob, cmd);
+}
+
+/* Initialize block I/O to DIAG device using the specified blocksize and
+ * block offset. On success, return zero and set end_block to contain the
+ * number of blocks on the device minus the specified offset. Return non-zero
+ * otherwise. */
+static inline int
+mdsk_init_io(struct dasd_device *device, unsigned int blocksize,
+	     blocknum_t offset, blocknum_t *end_block)
+{
+	struct dasd_diag_private *private = device->private;
+	struct dasd_diag_init_io *iib = &private->iib;
+	int rc;
+
+	memset(iib, 0, sizeof (struct dasd_diag_init_io));
+
+	iib->dev_nr = private->dev_id.devno;
+	iib->block_size = blocksize;
+	iib->offset = offset;
+	iib->flaga = DASD_DIAG_FLAGA_DEFAULT;
+
+	rc = dia250(iib, INIT_BIO);
+
+	if ((rc & 3) == 0 && end_block)
+		*end_block = iib->end_block;
+
+	return rc;
+}
+
+/* Remove block I/O environment for device. Return zero on success, non-zero
+ * otherwise. */
+static inline int
+mdsk_term_io(struct dasd_device * device)
+{
+	struct dasd_diag_private *private = device->private;
+	struct dasd_diag_init_io *iib = &private->iib;
+	int rc;
+
+	memset(iib, 0, sizeof (struct dasd_diag_init_io));
+	iib->dev_nr = private->dev_id.devno;
+	rc = dia250(iib, TERM_BIO);
+	return rc;
+}
+
+/* Error recovery for failed DIAG requests - try to reestablish the DIAG
+ * environment. */
+static void
+dasd_diag_erp(struct dasd_device *device)
+{
+	int rc;
+
+	mdsk_term_io(device);
+	rc = mdsk_init_io(device, device->block->bp_block, 0, NULL);
+	if (rc == 4) {
+		if (!(test_and_set_bit(DASD_FLAG_DEVICE_RO, &device->flags)))
+			pr_warn("%s: The access mode of a DIAG device changed to read-only\n",
+				dev_name(&device->cdev->dev));
+		rc = 0;
+	}
+	if (rc)
+		pr_warn("%s: DIAG ERP failed with rc=%d\n",
+			dev_name(&device->cdev->dev), rc);
+}
+
+/* Start a given request at the device. Return zero on success, non-zero
+ * otherwise. */
+static int
+dasd_start_diag(struct dasd_ccw_req * cqr)
+{
+	struct dasd_device *device;
+	struct dasd_diag_private *private;
+	struct dasd_diag_req *dreq;
+	int rc;
+
+	device = cqr->startdev;
+	if (cqr->retries < 0) {
+		DBF_DEV_EVENT(DBF_ERR, device, "DIAG start_IO: request %p "
+			      "- no retry left)", cqr);
+		cqr->status = DASD_CQR_ERROR;
+		return -EIO;
+	}
+	private = device->private;
+	dreq = cqr->data;
+
+	private->iob.dev_nr = private->dev_id.devno;
+	private->iob.key = 0;
+	private->iob.flags = DASD_DIAG_RWFLAG_ASYNC;
+	private->iob.block_count = dreq->block_count;
+	private->iob.interrupt_params = (addr_t) cqr;
+	private->iob.bio_list = dreq->bio;
+	private->iob.flaga = DASD_DIAG_FLAGA_DEFAULT;
+
+	cqr->startclk = get_tod_clock();
+	cqr->starttime = jiffies;
+	cqr->retries--;
+
+	rc = dia250(&private->iob, RW_BIO);
+	switch (rc) {
+	case 0: /* Synchronous I/O finished successfully */
+		cqr->stopclk = get_tod_clock();
+		cqr->status = DASD_CQR_SUCCESS;
+		/* Indicate to calling function that only a dasd_schedule_bh()
+		   and no timer is needed */
+		rc = -EACCES;
+		break;
+	case 8: /* Asynchronous I/O was started */
+		cqr->status = DASD_CQR_IN_IO;
+		rc = 0;
+		break;
+	default: /* Error condition */
+		cqr->status = DASD_CQR_QUEUED;
+		DBF_DEV_EVENT(DBF_WARNING, device, "dia250 returned rc=%d", rc);
+		dasd_diag_erp(device);
+		rc = -EIO;
+		break;
+	}
+	cqr->intrc = rc;
+	return rc;
+}
+
+/* Terminate given request at the device. */
+static int
+dasd_diag_term_IO(struct dasd_ccw_req * cqr)
+{
+	struct dasd_device *device;
+
+	device = cqr->startdev;
+	mdsk_term_io(device);
+	mdsk_init_io(device, device->block->bp_block, 0, NULL);
+	cqr->status = DASD_CQR_CLEAR_PENDING;
+	cqr->stopclk = get_tod_clock();
+	dasd_schedule_device_bh(device);
+	return 0;
+}
+
+/* Handle external interruption. */
+static void dasd_ext_handler(struct ext_code ext_code,
+			     unsigned int param32, unsigned long param64)
+{
+	struct dasd_ccw_req *cqr, *next;
+	struct dasd_device *device;
+	unsigned long expires;
+	unsigned long flags;
+	addr_t ip;
+	int rc;
+
+	switch (ext_code.subcode >> 8) {
+	case DASD_DIAG_CODE_31BIT:
+		ip = (addr_t) param32;
+		break;
+	case DASD_DIAG_CODE_64BIT:
+		ip = (addr_t) param64;
+		break;
+	default:
+		return;
+	}
+	inc_irq_stat(IRQEXT_DSD);
+	if (!ip) {		/* no intparm: unsolicited interrupt */
+		DBF_EVENT(DBF_NOTICE, "%s", "caught unsolicited "
+			  "interrupt");
+		return;
+	}
+	cqr = (struct dasd_ccw_req *) ip;
+	device = (struct dasd_device *) cqr->startdev;
+	if (strncmp(device->discipline->ebcname, (char *) &cqr->magic, 4)) {
+		DBF_DEV_EVENT(DBF_WARNING, device,
+			      " magic number of dasd_ccw_req 0x%08X doesn't"
+			      " match discipline 0x%08X",
+			      cqr->magic, *(int *) (&device->discipline->name));
+		return;
+	}
+
+	/* get irq lock to modify request queue */
+	spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
+
+	/* Check for a pending clear operation */
+	if (cqr->status == DASD_CQR_CLEAR_PENDING) {
+		cqr->status = DASD_CQR_CLEARED;
+		dasd_device_clear_timer(device);
+		dasd_schedule_device_bh(device);
+		spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
+		return;
+	}
+
+	cqr->stopclk = get_tod_clock();
+
+	expires = 0;
+	if ((ext_code.subcode & 0xff) == 0) {
+		cqr->status = DASD_CQR_SUCCESS;
+		/* Start first request on queue if possible -> fast_io. */
+		if (!list_empty(&device->ccw_queue)) {
+			next = list_entry(device->ccw_queue.next,
+					  struct dasd_ccw_req, devlist);
+			if (next->status == DASD_CQR_QUEUED) {
+				rc = dasd_start_diag(next);
+				if (rc == 0)
+					expires = next->expires;
+			}
+		}
+	} else {
+		cqr->status = DASD_CQR_QUEUED;
+		DBF_DEV_EVENT(DBF_DEBUG, device, "interrupt status for "
+			      "request %p was %d (%d retries left)", cqr,
+			      ext_code.subcode & 0xff, cqr->retries);
+		dasd_diag_erp(device);
+	}
+
+	if (expires != 0)
+		dasd_device_set_timer(device, expires);
+	else
+		dasd_device_clear_timer(device);
+	dasd_schedule_device_bh(device);
+
+	spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
+}
+
+/* Check whether device can be controlled by DIAG discipline. Return zero on
+ * success, non-zero otherwise. */
+static int
+dasd_diag_check_device(struct dasd_device *device)
+{
+	struct dasd_diag_private *private = device->private;
+	struct dasd_diag_characteristics *rdc_data;
+	struct vtoc_cms_label *label;
+	struct dasd_block *block;
+	struct dasd_diag_bio *bio;
+	unsigned int sb, bsize;
+	blocknum_t end_block;
+	int rc;
+
+	if (private == NULL) {
+		private = kzalloc(sizeof(*private), GFP_KERNEL);
+		if (private == NULL) {
+			DBF_DEV_EVENT(DBF_WARNING, device, "%s",
+				      "Allocating memory for private DASD data "
+				      "failed\n");
+			return -ENOMEM;
+		}
+		ccw_device_get_id(device->cdev, &private->dev_id);
+		device->private = private;
+	}
+	block = dasd_alloc_block();
+	if (IS_ERR(block)) {
+		DBF_DEV_EVENT(DBF_WARNING, device, "%s",
+			      "could not allocate dasd block structure");
+		device->private = NULL;
+		kfree(private);
+		return PTR_ERR(block);
+	}
+	device->block = block;
+	block->base = device;
+
+	/* Read Device Characteristics */
+	rdc_data = &private->rdc_data;
+	rdc_data->dev_nr = private->dev_id.devno;
+	rdc_data->rdc_len = sizeof (struct dasd_diag_characteristics);
+
+	rc = diag210((struct diag210 *) rdc_data);
+	if (rc) {
+		DBF_DEV_EVENT(DBF_WARNING, device, "failed to retrieve device "
+			      "information (rc=%d)", rc);
+		rc = -EOPNOTSUPP;
+		goto out;
+	}
+
+	device->default_expires = DIAG_TIMEOUT;
+	device->default_retries = DIAG_MAX_RETRIES;
+
+	/* Figure out position of label block */
+	switch (private->rdc_data.vdev_class) {
+	case DEV_CLASS_FBA:
+		private->pt_block = 1;
+		break;
+	case DEV_CLASS_ECKD:
+		private->pt_block = 2;
+		break;
+	default:
+		pr_warn("%s: Device type %d is not supported in DIAG mode\n",
+			dev_name(&device->cdev->dev),
+			private->rdc_data.vdev_class);
+		rc = -EOPNOTSUPP;
+		goto out;
+	}
+
+	DBF_DEV_EVENT(DBF_INFO, device,
+		      "%04X: %04X on real %04X/%02X",
+		      rdc_data->dev_nr,
+		      rdc_data->vdev_type,
+		      rdc_data->rdev_type, rdc_data->rdev_model);
+
+	/* terminate all outstanding operations */
+	mdsk_term_io(device);
+
+	/* figure out blocksize of device */
+	label = (struct vtoc_cms_label *) get_zeroed_page(GFP_KERNEL);
+	if (label == NULL) {
+		DBF_DEV_EVENT(DBF_WARNING, device, "%s",
+			      "No memory to allocate initialization request");
+		rc = -ENOMEM;
+		goto out;
+	}
+	bio = kzalloc(sizeof(*bio), GFP_KERNEL);
+	if (bio == NULL) {
+		DBF_DEV_EVENT(DBF_WARNING, device, "%s",
+			      "No memory to allocate initialization bio");
+		rc = -ENOMEM;
+		goto out_label;
+	}
+	rc = 0;
+	end_block = 0;
+	/* try all sizes - needed for ECKD devices */
+	for (bsize = 512; bsize <= PAGE_SIZE; bsize <<= 1) {
+		mdsk_init_io(device, bsize, 0, &end_block);
+		memset(bio, 0, sizeof(*bio));
+		bio->type = MDSK_READ_REQ;
+		bio->block_number = private->pt_block + 1;
+		bio->buffer = label;
+		memset(&private->iob, 0, sizeof (struct dasd_diag_rw_io));
+		private->iob.dev_nr = rdc_data->dev_nr;
+		private->iob.key = 0;
+		private->iob.flags = 0;	/* do synchronous io */
+		private->iob.block_count = 1;
+		private->iob.interrupt_params = 0;
+		private->iob.bio_list = bio;
+		private->iob.flaga = DASD_DIAG_FLAGA_DEFAULT;
+		rc = dia250(&private->iob, RW_BIO);
+		if (rc == 3) {
+			pr_warn("%s: A 64-bit DIAG call failed\n",
+				dev_name(&device->cdev->dev));
+			rc = -EOPNOTSUPP;
+			goto out_bio;
+		}
+		mdsk_term_io(device);
+		if (rc == 0)
+			break;
+	}
+	if (bsize > PAGE_SIZE) {
+		pr_warn("%s: Accessing the DASD failed because of an incorrect format (rc=%d)\n",
+			dev_name(&device->cdev->dev), rc);
+		rc = -EIO;
+		goto out_bio;
+	}
+	/* check for label block */
+	if (memcmp(label->label_id, DASD_DIAG_CMS1,
+		   sizeof(DASD_DIAG_CMS1)) == 0) {
+		/* get formatted blocksize from label block */
+		bsize = (unsigned int) label->block_size;
+		block->blocks = (unsigned long) label->block_count;
+	} else
+		block->blocks = end_block;
+	block->bp_block = bsize;
+	block->s2b_shift = 0;	/* bits to shift 512 to get a block */
+	for (sb = 512; sb < bsize; sb = sb << 1)
+		block->s2b_shift++;
+	rc = mdsk_init_io(device, block->bp_block, 0, NULL);
+	if (rc && (rc != 4)) {
+		pr_warn("%s: DIAG initialization failed with rc=%d\n",
+			dev_name(&device->cdev->dev), rc);
+		rc = -EIO;
+	} else {
+		if (rc == 4)
+			set_bit(DASD_FLAG_DEVICE_RO, &device->flags);
+		pr_info("%s: New DASD with %ld byte/block, total size %ld "
+			"KB%s\n", dev_name(&device->cdev->dev),
+			(unsigned long) block->bp_block,
+			(unsigned long) (block->blocks <<
+					 block->s2b_shift) >> 1,
+			(rc == 4) ? ", read-only device" : "");
+		rc = 0;
+	}
+out_bio:
+	kfree(bio);
+out_label:
+	free_page((long) label);
+out:
+	if (rc) {
+		device->block = NULL;
+		dasd_free_block(block);
+		device->private = NULL;
+		kfree(private);
+	}
+	return rc;
+}
+
+/* Fill in virtual disk geometry for device. Return zero on success, non-zero
+ * otherwise. */
+static int
+dasd_diag_fill_geometry(struct dasd_block *block, struct hd_geometry *geo)
+{
+	if (dasd_check_blocksize(block->bp_block) != 0)
+		return -EINVAL;
+	geo->cylinders = (block->blocks << block->s2b_shift) >> 10;
+	geo->heads = 16;
+	geo->sectors = 128 >> block->s2b_shift;
+	return 0;
+}
+
+static dasd_erp_fn_t
+dasd_diag_erp_action(struct dasd_ccw_req * cqr)
+{
+	return dasd_default_erp_action;
+}
+
+static dasd_erp_fn_t
+dasd_diag_erp_postaction(struct dasd_ccw_req * cqr)
+{
+	return dasd_default_erp_postaction;
+}
+
+/* Create DASD request from block device request. Return pointer to new
+ * request on success, ERR_PTR otherwise. */
+static struct dasd_ccw_req *dasd_diag_build_cp(struct dasd_device *memdev,
+					       struct dasd_block *block,
+					       struct request *req)
+{
+	struct dasd_ccw_req *cqr;
+	struct dasd_diag_req *dreq;
+	struct dasd_diag_bio *dbio;
+	struct req_iterator iter;
+	struct bio_vec bv;
+	char *dst;
+	unsigned int count;
+	sector_t recid, first_rec, last_rec;
+	unsigned int blksize, off;
+	unsigned char rw_cmd;
+
+	if (rq_data_dir(req) == READ)
+		rw_cmd = MDSK_READ_REQ;
+	else if (rq_data_dir(req) == WRITE)
+		rw_cmd = MDSK_WRITE_REQ;
+	else
+		return ERR_PTR(-EINVAL);
+	blksize = block->bp_block;
+	/* Calculate record id of first and last block. */
+	first_rec = blk_rq_pos(req) >> block->s2b_shift;
+	last_rec =
+		(blk_rq_pos(req) + blk_rq_sectors(req) - 1) >> block->s2b_shift;
+	/* Check struct bio and count the number of blocks for the request. */
+	count = 0;
+	rq_for_each_segment(bv, req, iter) {
+		if (bv.bv_len & (blksize - 1))
+			/* Fba can only do full blocks. */
+			return ERR_PTR(-EINVAL);
+		count += bv.bv_len >> (block->s2b_shift + 9);
+	}
+	/* Paranoia. */
+	if (count != last_rec - first_rec + 1)
+		return ERR_PTR(-EINVAL);
+	/* Build the request */
+	cqr = dasd_smalloc_request(DASD_DIAG_MAGIC, 0, struct_size(dreq, bio, count),
+				   memdev, blk_mq_rq_to_pdu(req));
+	if (IS_ERR(cqr))
+		return cqr;
+
+	dreq = (struct dasd_diag_req *) cqr->data;
+	dreq->block_count = count;
+	dbio = dreq->bio;
+	recid = first_rec;
+	rq_for_each_segment(bv, req, iter) {
+		dst = bvec_virt(&bv);
+		for (off = 0; off < bv.bv_len; off += blksize) {
+			memset(dbio, 0, sizeof (struct dasd_diag_bio));
+			dbio->type = rw_cmd;
+			dbio->block_number = recid + 1;
+			dbio->buffer = dst;
+			dbio++;
+			dst += blksize;
+			recid++;
+		}
+	}
+	cqr->retries = memdev->default_retries;
+	cqr->buildclk = get_tod_clock();
+	if (blk_noretry_request(req) ||
+	    block->base->features & DASD_FEATURE_FAILFAST)
+		set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
+	cqr->startdev = memdev;
+	cqr->memdev = memdev;
+	cqr->block = block;
+	cqr->expires = memdev->default_expires * HZ;
+	cqr->status = DASD_CQR_FILLED;
+	return cqr;
+}
+
+/* Release DASD request. Return non-zero if request was successful, zero
+ * otherwise. */
+static int
+dasd_diag_free_cp(struct dasd_ccw_req *cqr, struct request *req)
+{
+	int status;
+
+	status = cqr->status == DASD_CQR_DONE;
+	dasd_sfree_request(cqr, cqr->memdev);
+	return status;
+}
+
+static void dasd_diag_handle_terminated_request(struct dasd_ccw_req *cqr)
+{
+	if (cqr->retries < 0)
+		cqr->status = DASD_CQR_FAILED;
+	else
+		cqr->status = DASD_CQR_FILLED;
+};
+
+/* Fill in IOCTL data for device. */
+static int
+dasd_diag_fill_info(struct dasd_device * device,
+		    struct dasd_information2_t * info)
+{
+	struct dasd_diag_private *private = device->private;
+
+	info->label_block = (unsigned int) private->pt_block;
+	info->FBA_layout = 1;
+	info->format = DASD_FORMAT_LDL;
+	info->characteristics_size = sizeof(private->rdc_data);
+	memcpy(info->characteristics, &private->rdc_data,
+	       sizeof(private->rdc_data));
+	info->confdata_size = 0;
+	return 0;
+}
+
+static void
+dasd_diag_dump_sense(struct dasd_device *device, struct dasd_ccw_req * req,
+		     struct irb *stat)
+{
+	DBF_DEV_EVENT(DBF_WARNING, device, "%s",
+		      "dump sense not available for DIAG data");
+}
+
+/*
+ * Initialize block layer request queue.
+ */
+static void dasd_diag_setup_blk_queue(struct dasd_block *block)
+{
+	unsigned int logical_block_size = block->bp_block;
+	struct request_queue *q = block->gdp->queue;
+	int max;
+
+	max = DIAG_MAX_BLOCKS << block->s2b_shift;
+	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
+	q->limits.max_dev_sectors = max;
+	blk_queue_logical_block_size(q, logical_block_size);
+	blk_queue_max_hw_sectors(q, max);
+	blk_queue_max_segments(q, USHRT_MAX);
+	/* With page sized segments each segment can be translated into one idaw/tidaw */
+	blk_queue_max_segment_size(q, PAGE_SIZE);
+	blk_queue_segment_boundary(q, PAGE_SIZE - 1);
+	blk_queue_dma_alignment(q, PAGE_SIZE - 1);
+}
+
+static int dasd_diag_pe_handler(struct dasd_device *device,
+				__u8 tbvpm, __u8 fcsecpm)
+{
+	return dasd_generic_verify_path(device, tbvpm);
+}
+
+static struct dasd_discipline dasd_diag_discipline = {
+	.owner = THIS_MODULE,
+	.name = "DIAG",
+	.ebcname = "DIAG",
+	.check_device = dasd_diag_check_device,
+	.pe_handler = dasd_diag_pe_handler,
+	.fill_geometry = dasd_diag_fill_geometry,
+	.setup_blk_queue = dasd_diag_setup_blk_queue,
+	.start_IO = dasd_start_diag,
+	.term_IO = dasd_diag_term_IO,
+	.handle_terminated_request = dasd_diag_handle_terminated_request,
+	.erp_action = dasd_diag_erp_action,
+	.erp_postaction = dasd_diag_erp_postaction,
+	.build_cp = dasd_diag_build_cp,
+	.free_cp = dasd_diag_free_cp,
+	.dump_sense = dasd_diag_dump_sense,
+	.fill_info = dasd_diag_fill_info,
+};
+
+static int __init
+dasd_diag_init(void)
+{
+	if (!MACHINE_IS_VM) {
+		pr_info("Discipline %s cannot be used without z/VM\n",
+			dasd_diag_discipline.name);
+		return -ENODEV;
+	}
+	ASCEBC(dasd_diag_discipline.ebcname, 4);
+
+	irq_subclass_register(IRQ_SUBCLASS_SERVICE_SIGNAL);
+	register_external_irq(EXT_IRQ_CP_SERVICE, dasd_ext_handler);
+	dasd_diag_discipline_pointer = &dasd_diag_discipline;
+	return 0;
+}
+
+static void __exit
+dasd_diag_cleanup(void)
+{
+	unregister_external_irq(EXT_IRQ_CP_SERVICE, dasd_ext_handler);
+	irq_subclass_unregister(IRQ_SUBCLASS_SERVICE_SIGNAL);
+	dasd_diag_discipline_pointer = NULL;
+}
+
+module_init(dasd_diag_init);
+module_exit(dasd_diag_cleanup);