author    2023-02-21 18:24:12 -0800
committer 2023-02-21 18:24:12 -0800
commit    5b7c4cabbb65f5c469464da6c5f614cbd7f730f2
tree      cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /crypto/xts.c
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs.
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port
range on a per-socket basis (a usage sketch follows the quoted
message).
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- Wi-Fi 7 EHT channel puncturing support (client & AP).
BPF:
- Add an rbtree data structure following the "next-gen data structure"
precedent set by the recently added linked list, that is, using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in
collect-metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes, and of syscalls as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs.
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still see WARN splats from races in the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of the Physical Layer Collision Avoidance
(PLCA) Reconciliation Sublayer (RS) (802.3cg-2019), a modern
version of shared-medium Ethernet.
- Support for the MAC Merge layer (IEEE 802.3-2018 clause 99),
allowing preemption of low-priority frames by high-priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if the old Wireless Extensions user space interface is used
with modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
the nl80211 interface instead.
- Improve the CAN bit-timing configuration: use extack to return
error messages directly to user space, and update the SJW handling,
including a new default value that benefits CAN FD controllers by
increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper PHYs
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP-related parameters
(STEP is the connection to the radio on platforms with integrated
Wi-Fi) from the BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support a new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
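
Of the uAPI additions above, the IP_LOCAL_PORT_RANGE socket option is the
most self-contained, so a short user-space sketch may help. This is a
minimal illustration, assuming the 6.3 encoding (one __u32 with the low
bound in the low 16 bits and the high bound in the high 16 bits) and the
<linux/in.h> option value 51; the fallback #define is only for older libc
headers, and the port numbers are arbitrary:

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IP_LOCAL_PORT_RANGE
#define IP_LOCAL_PORT_RANGE 51	/* from <linux/in.h>, kernel >= 6.3 */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* Restrict this socket's ephemeral ports to 40000-40099. */
	uint32_t lo = 40000, hi = 40099;
	uint32_t range = (hi << 16) | lo;

	if (setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
		       &range, sizeof(range)) < 0)
		perror("setsockopt(IP_LOCAL_PORT_RANGE)");

	/* A bind() to port 0 or a connect() now draws the local port
	 * from 40000-40099 instead of net.ipv4.ip_local_port_range. */
	close(fd);
	return 0;
}

This per-socket override is what makes it possible to partition the
ephemeral range between sockets or processes, e.g. for SNAT port-range
splitting.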
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'crypto/xts.c'):
 crypto/xts.c | 469 +
 1 file changed, 469 insertions(+), 0 deletions(-)
diff --git a/crypto/xts.c b/crypto/xts.c
new file mode 100644
index 000000000..09be909a6
--- /dev/null
+++ b/crypto/xts.c
@@ -0,0 +1,469 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* XTS: as defined in IEEE1619/D16
+ * http://grouper.ieee.org/groups/1619/email/pdf00086.pdf
+ *
+ * Copyright (c) 2007 Rik Snel <rsnel@cube.dyndns.org>
+ *
+ * Based on ecb.c
+ * Copyright (c) 2006 Herbert Xu <herbert@gondor.apana.org.au>
+ */
+#include <crypto/internal/cipher.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/scatterwalk.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+
+#include <crypto/xts.h>
+#include <crypto/b128ops.h>
+#include <crypto/gf128mul.h>
+
+struct xts_tfm_ctx {
+	struct crypto_skcipher *child;
+	struct crypto_cipher *tweak;
+};
+
+struct xts_instance_ctx {
+	struct crypto_skcipher_spawn spawn;
+	char name[CRYPTO_MAX_ALG_NAME];
+};
+
+struct xts_request_ctx {
+	le128 t;
+	struct scatterlist *tail;
+	struct scatterlist sg[2];
+	struct skcipher_request subreq;
+};
+
+static int xts_setkey(struct crypto_skcipher *parent, const u8 *key,
+		      unsigned int keylen)
+{
+	struct xts_tfm_ctx *ctx = crypto_skcipher_ctx(parent);
+	struct crypto_skcipher *child;
+	struct crypto_cipher *tweak;
+	int err;
+
+	err = xts_verify_key(parent, key, keylen);
+	if (err)
+		return err;
+
+	keylen /= 2;
+
+	/* we need two cipher instances: one to compute the initial 'tweak'
+	 * by encrypting the IV (usually the 'plain' iv) and the other
+	 * one to encrypt and decrypt the data */
+
+	/* tweak cipher, uses Key2 i.e. the second half of *key */
+	tweak = ctx->tweak;
+	crypto_cipher_clear_flags(tweak, CRYPTO_TFM_REQ_MASK);
+	crypto_cipher_set_flags(tweak, crypto_skcipher_get_flags(parent) &
+				       CRYPTO_TFM_REQ_MASK);
+	err = crypto_cipher_setkey(tweak, key + keylen, keylen);
+	if (err)
+		return err;
+
+	/* data cipher, uses Key1 i.e. the first half of *key */
+	child = ctx->child;
+	crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(child, crypto_skcipher_get_flags(parent) &
+					 CRYPTO_TFM_REQ_MASK);
+	return crypto_skcipher_setkey(child, key, keylen);
+}
+
+/*
+ * We compute the tweak masks twice (both before and after the ECB encryption or
+ * decryption) to avoid having to allocate a temporary buffer and/or make
+ * multiple calls to the 'ecb(..)' instance, which usually would be slower than
+ * just doing the gf128mul_x_ble() calls again.
+ */
+static int xts_xor_tweak(struct skcipher_request *req, bool second_pass,
+			 bool enc)
+{
+	struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	const bool cts = (req->cryptlen % XTS_BLOCK_SIZE);
+	const int bs = XTS_BLOCK_SIZE;
+	struct skcipher_walk w;
+	le128 t = rctx->t;
+	int err;
+
+	if (second_pass) {
+		req = &rctx->subreq;
+		/* set to our TFM to enforce correct alignment: */
+		skcipher_request_set_tfm(req, tfm);
+	}
+	err = skcipher_walk_virt(&w, req, false);
+
+	while (w.nbytes) {
+		unsigned int avail = w.nbytes;
+		le128 *wsrc;
+		le128 *wdst;
+
+		wsrc = w.src.virt.addr;
+		wdst = w.dst.virt.addr;
+
+		do {
+			if (unlikely(cts) &&
+			    w.total - w.nbytes + avail < 2 * XTS_BLOCK_SIZE) {
+				if (!enc) {
+					if (second_pass)
+						rctx->t = t;
+					gf128mul_x_ble(&t, &t);
+				}
+				le128_xor(wdst, &t, wsrc);
+				if (enc && second_pass)
+					gf128mul_x_ble(&rctx->t, &t);
+				skcipher_walk_done(&w, avail - bs);
+				return 0;
+			}
+
+			le128_xor(wdst++, &t, wsrc++);
+			gf128mul_x_ble(&t, &t);
+		} while ((avail -= bs) >= bs);
+
+		err = skcipher_walk_done(&w, avail);
+	}
+
+	return err;
+}
+
+static int xts_xor_tweak_pre(struct skcipher_request *req, bool enc)
+{
+	return xts_xor_tweak(req, false, enc);
+}
+
+static int xts_xor_tweak_post(struct skcipher_request *req, bool enc)
+{
+	return xts_xor_tweak(req, true, enc);
+}
+
+static void xts_cts_done(void *data, int err)
+{
+	struct skcipher_request *req = data;
+	le128 b;
+
+	if (!err) {
+		struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+
+		scatterwalk_map_and_copy(&b, rctx->tail, 0, XTS_BLOCK_SIZE, 0);
+		le128_xor(&b, &rctx->t, &b);
+		scatterwalk_map_and_copy(&b, rctx->tail, 0, XTS_BLOCK_SIZE, 1);
+	}
+
+	skcipher_request_complete(req, err);
+}
+
+static int xts_cts_final(struct skcipher_request *req,
+			 int (*crypt)(struct skcipher_request *req))
+{
+	const struct xts_tfm_ctx *ctx =
+		crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	int offset = req->cryptlen & ~(XTS_BLOCK_SIZE - 1);
+	struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+	struct skcipher_request *subreq = &rctx->subreq;
+	int tail = req->cryptlen % XTS_BLOCK_SIZE;
+	le128 b[2];
+	int err;
+
+	rctx->tail = scatterwalk_ffwd(rctx->sg, req->dst,
+				      offset - XTS_BLOCK_SIZE);
+
+	scatterwalk_map_and_copy(b, rctx->tail, 0, XTS_BLOCK_SIZE, 0);
+	b[1] = b[0];
+	scatterwalk_map_and_copy(b, req->src, offset, tail, 0);
+
+	le128_xor(b, &rctx->t, b);
+
+	scatterwalk_map_and_copy(b, rctx->tail, 0, XTS_BLOCK_SIZE + tail, 1);
+
+	skcipher_request_set_tfm(subreq, ctx->child);
+	skcipher_request_set_callback(subreq, req->base.flags, xts_cts_done,
+				      req);
+	skcipher_request_set_crypt(subreq, rctx->tail, rctx->tail,
+				   XTS_BLOCK_SIZE, NULL);
+
+	err = crypt(subreq);
+	if (err)
+		return err;
+
+	scatterwalk_map_and_copy(b, rctx->tail, 0, XTS_BLOCK_SIZE, 0);
+	le128_xor(b, &rctx->t, b);
+	scatterwalk_map_and_copy(b, rctx->tail, 0, XTS_BLOCK_SIZE, 1);
+
+	return 0;
+}
+
+static void xts_encrypt_done(void *data, int err)
+{
+	struct skcipher_request *req = data;
+
+	if (!err) {
+		struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+
+		rctx->subreq.base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
+		err = xts_xor_tweak_post(req, true);
+
+		if (!err && unlikely(req->cryptlen % XTS_BLOCK_SIZE)) {
+			err = xts_cts_final(req, crypto_skcipher_encrypt);
+			if (err == -EINPROGRESS || err == -EBUSY)
+				return;
+		}
+	}
+
+	skcipher_request_complete(req, err);
+}
+
+static void xts_decrypt_done(void *data, int err)
+{
+	struct skcipher_request *req = data;
+
+	if (!err) {
+		struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+
+		rctx->subreq.base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
+		err = xts_xor_tweak_post(req, false);
+
+		if (!err && unlikely(req->cryptlen % XTS_BLOCK_SIZE)) {
+			err = xts_cts_final(req, crypto_skcipher_decrypt);
+			if (err == -EINPROGRESS || err == -EBUSY)
+				return;
+		}
+	}
+
+	skcipher_request_complete(req, err);
+}
+
+static int xts_init_crypt(struct skcipher_request *req,
+			  crypto_completion_t compl)
+{
+	const struct xts_tfm_ctx *ctx =
+		crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+	struct skcipher_request *subreq = &rctx->subreq;
+
+	if (req->cryptlen < XTS_BLOCK_SIZE)
+		return -EINVAL;
+
+	skcipher_request_set_tfm(subreq, ctx->child);
+	skcipher_request_set_callback(subreq, req->base.flags, compl, req);
+	skcipher_request_set_crypt(subreq, req->dst, req->dst,
+				   req->cryptlen & ~(XTS_BLOCK_SIZE - 1), NULL);
+
+	/* calculate first value of T */
+	crypto_cipher_encrypt_one(ctx->tweak, (u8 *)&rctx->t, req->iv);
+
+	return 0;
+}
+
+static int xts_encrypt(struct skcipher_request *req)
+{
+	struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+	struct skcipher_request *subreq = &rctx->subreq;
+	int err;
+
+	err = xts_init_crypt(req, xts_encrypt_done) ?:
+	      xts_xor_tweak_pre(req, true) ?:
+	      crypto_skcipher_encrypt(subreq) ?:
+	      xts_xor_tweak_post(req, true);
+
+	if (err || likely((req->cryptlen % XTS_BLOCK_SIZE) == 0))
+		return err;
+
+	return xts_cts_final(req, crypto_skcipher_encrypt);
+}
+
+static int xts_decrypt(struct skcipher_request *req)
+{
+	struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+	struct skcipher_request *subreq = &rctx->subreq;
+	int err;
+
+	err = xts_init_crypt(req, xts_decrypt_done) ?:
+	      xts_xor_tweak_pre(req, false) ?:
+	      crypto_skcipher_decrypt(subreq) ?:
+	      xts_xor_tweak_post(req, false);
+
+	if (err || likely((req->cryptlen % XTS_BLOCK_SIZE) == 0))
+		return err;
+
+	return xts_cts_final(req, crypto_skcipher_decrypt);
+}
+
+static int xts_init_tfm(struct crypto_skcipher *tfm)
+{
+	struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+	struct xts_instance_ctx *ictx = skcipher_instance_ctx(inst);
+	struct xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct crypto_skcipher *child;
+	struct crypto_cipher *tweak;
+
+	child = crypto_spawn_skcipher(&ictx->spawn);
+	if (IS_ERR(child))
+		return PTR_ERR(child);
+
+	ctx->child = child;
+
+	tweak = crypto_alloc_cipher(ictx->name, 0, 0);
+	if (IS_ERR(tweak)) {
+		crypto_free_skcipher(ctx->child);
+		return PTR_ERR(tweak);
+	}
+
+	ctx->tweak = tweak;
+
+	crypto_skcipher_set_reqsize(tfm, crypto_skcipher_reqsize(child) +
+					 sizeof(struct xts_request_ctx));
+
+	return 0;
+}
+
+static void xts_exit_tfm(struct crypto_skcipher *tfm)
+{
+	struct xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	crypto_free_skcipher(ctx->child);
+	crypto_free_cipher(ctx->tweak);
+}
+
+static void xts_free_instance(struct skcipher_instance *inst)
+{
+	struct xts_instance_ctx *ictx = skcipher_instance_ctx(inst);
+
+	crypto_drop_skcipher(&ictx->spawn);
+	kfree(inst);
+}
+
+static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+	struct skcipher_instance *inst;
+	struct xts_instance_ctx *ctx;
+	struct skcipher_alg *alg;
+	const char *cipher_name;
+	u32 mask;
+	int err;
+
+	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER, &mask);
+	if (err)
+		return err;
+
+	cipher_name = crypto_attr_alg_name(tb[1]);
+	if (IS_ERR(cipher_name))
+		return PTR_ERR(cipher_name);
+
+	inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+	if (!inst)
+		return -ENOMEM;
+
+	ctx = skcipher_instance_ctx(inst);
+
+	err = crypto_grab_skcipher(&ctx->spawn, skcipher_crypto_instance(inst),
+				   cipher_name, 0, mask);
+	if (err == -ENOENT) {
+		err = -ENAMETOOLONG;
+		if (snprintf(ctx->name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
+			     cipher_name) >= CRYPTO_MAX_ALG_NAME)
+			goto err_free_inst;
+
+		err = crypto_grab_skcipher(&ctx->spawn,
+					   skcipher_crypto_instance(inst),
+					   ctx->name, 0, mask);
+	}
+
+	if (err)
+		goto err_free_inst;
+
+	alg = crypto_skcipher_spawn_alg(&ctx->spawn);
+
+	err = -EINVAL;
+	if (alg->base.cra_blocksize != XTS_BLOCK_SIZE)
+		goto err_free_inst;
+
+	if (crypto_skcipher_alg_ivsize(alg))
+		goto err_free_inst;
+
+	err = crypto_inst_setname(skcipher_crypto_instance(inst), "xts",
+				  &alg->base);
+	if (err)
+		goto err_free_inst;
+
+	err = -EINVAL;
+	cipher_name = alg->base.cra_name;
+
+	/* Alas we screwed up the naming so we have to mangle the
+	 * cipher name.
+	 */
+	if (!strncmp(cipher_name, "ecb(", 4)) {
+		unsigned len;
+
+		len = strlcpy(ctx->name, cipher_name + 4, sizeof(ctx->name));
+		if (len < 2 || len >= sizeof(ctx->name))
+			goto err_free_inst;
+
+		if (ctx->name[len - 1] != ')')
+			goto err_free_inst;
+
+		ctx->name[len - 1] = 0;
+
+		if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
+			     "xts(%s)", ctx->name) >= CRYPTO_MAX_ALG_NAME) {
+			err = -ENAMETOOLONG;
+			goto err_free_inst;
+		}
+	} else
+		goto err_free_inst;
+
+	inst->alg.base.cra_priority = alg->base.cra_priority;
+	inst->alg.base.cra_blocksize = XTS_BLOCK_SIZE;
+	inst->alg.base.cra_alignmask = alg->base.cra_alignmask |
+				       (__alignof__(u64) - 1);
+
+	inst->alg.ivsize = XTS_BLOCK_SIZE;
+	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) * 2;
+	inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg) * 2;
+
+	inst->alg.base.cra_ctxsize = sizeof(struct xts_tfm_ctx);
+
+	inst->alg.init = xts_init_tfm;
+	inst->alg.exit = xts_exit_tfm;
+
+	inst->alg.setkey = xts_setkey;
+	inst->alg.encrypt = xts_encrypt;
+	inst->alg.decrypt = xts_decrypt;
+
+	inst->free = xts_free_instance;
+
+	err = skcipher_register_instance(tmpl, inst);
+	if (err) {
+err_free_inst:
+		xts_free_instance(inst);
+	}
+	return err;
+}
+
+static struct crypto_template xts_tmpl = {
+	.name = "xts",
+	.create = xts_create,
+	.module = THIS_MODULE,
+};
+
+static int __init xts_module_init(void)
+{
+	return crypto_register_template(&xts_tmpl);
+}
+
+static void __exit xts_module_exit(void)
+{
+	crypto_unregister_template(&xts_tmpl);
+}
+
+subsys_initcall(xts_module_init);
+module_exit(xts_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("XTS block cipher mode");
+MODULE_ALIAS_CRYPTO("xts");
+MODULE_IMPORT_NS(CRYPTO_INTERNAL);
+MODULE_SOFTDEP("pre: ecb");
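
For context on how the "xts" template registered above is consumed, here is
a hypothetical, minimal kernel-side sketch. The function name xts_demo, the
choice of "xts(aes)" and the one-page buffer are illustrative assumptions,
not part of the patch; the calls themselves (crypto_alloc_skcipher(),
crypto_skcipher_setkey(), skcipher_request_*(), crypto_wait_req()) are the
standard synchronous-wrapper pattern for the skcipher interface:

#include <crypto/skcipher.h>
#include <crypto/xts.h>
#include <linux/crypto.h>
#include <linux/random.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int xts_demo(void)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	u8 key[64];			/* Key1 || Key2; xts_setkey() splits it */
	u8 iv[XTS_BLOCK_SIZE] = {};	/* tweak IV, e.g. LE sector number */
	u8 *buf;
	int err;

	tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);	/* built by xts_create() */
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	get_random_bytes(key, sizeof(key));
	err = crypto_skcipher_setkey(tfm, key, sizeof(key));
	if (err)
		goto out_tfm;

	buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
	if (!buf) {
		err = -ENOMEM;
		goto out_tfm;
	}

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_buf;
	}

	sg_init_one(&sg, buf, PAGE_SIZE);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
				      CRYPTO_TFM_REQ_MAY_BACKLOG,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, PAGE_SIZE, iv);

	/* Runs xts_encrypt(): pre-XOR tweaks, one ecb(aes) pass, post-XOR. */
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_buf:
	kfree(buf);
out_tfm:
	crypto_free_skcipher(tfm);
	return err;
}

Note from xts_init_crypt() that cryptlen must be at least XTS_BLOCK_SIZE
but need not be a multiple of it: a trailing partial block is handled by
the ciphertext-stealing path in xts_cts_final().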