author    | Linus Torvalds <torvalds@linux-foundation.org> | 2022-12-13 15:47:48 -0800
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2022-12-13 15:47:48 -0800
commit    | 7e68dd7d07a28faa2e6574dd6b9dbd90cdeaae91 (patch)
tree      | ae0427c5a3b905f24b3a44b510a9bcf35d9b67a3 /drivers/net/ethernet/netronome
parent    | 1ca06f1c1acecbe02124f14a37cce347b8c1a90c (diff)
parent    | 7c4a6309e27f411743817fe74a832ec2d2798a4b (diff)
download  | linux-next-7e68dd7d07a28faa2e6574dd6b9dbd90cdeaae91.tar.gz
Merge tag 'net-next-6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Paolo Abeni:
"Core:
- Allow live renaming when an interface is up
- Add retpoline wrappers for tc, considerably improving the
performance of complex queue discipline configurations
- Add inet drop monitor support
- A few GRO performance improvements
- Add infrastructure for atomic dev stats, addressing long-standing
data races
- De-duplicate common code between OVS and conntrack offloading
infrastructure
- A bunch of UBSAN_BOUNDS/FORTIFY_SOURCE improvements
- Netfilter: introduce packet parser for tunneled packets
- Replace IPVS timer-based estimators with kthreads to scale up the
workload with the number of available CPUs
- Add helper support for connection-tracking OVS offload
BPF:
- Support for user-defined BPF objects: the use case is allocating
your own objects, building your own object hierarchies, and using
the building blocks to flexibly build your own data structures,
for example linked lists in BPF
- Make cgroup local storage available to non-cgroup attached BPF
programs
- Avoid unnecessary deadlock detection and failures with respect to
BPF task storage helpers
- A substantial set of BPF verifier fixes and improvements
- Veristat tool improvements to support custom filtering, sorting,
and replay of results
- Add LLVM disassembler as default library for dumping JITed code
- Lots of new BPF documentation for various BPF maps
- Add bpf_rcu_read_{,un}lock() support for sleepable programs
- Add RCU grace period chaining to BPF to wait for the completion of
access from both sleepable and non-sleepable BPF programs
- Add support for storing struct task_struct objects as kptrs in maps
- Improve helper UAPI by explicitly defining BPF_FUNC_xxx integer
values
- Add libbpf *_opts API-variants for bpf_*_get_fd_by_id() functions
Protocols:
- TCP: implement Protective Load Balancing across switch links
- TCP: allow dynamically disabling the TCP-MD5 static key, reverting
to the fast[er] path
- UDP: Introduce optional per-netns hash lookup table
- IPv6: simplify and clean up socket disposal
- Netlink: support different type policies for each generic netlink
operation
- MPTCP: add MSG_FASTOPEN and FastOpen listener side support
- MPTCP: add netlink notification support for listener socket events
- SCTP: add VRF support, allowing SCTP sockets to bind to VRF devices
- Add bridging MAC Authentication Bypass (MAB) support
- Extend the Ethernet VPN bridging implementation to better
support multicast scenarios
- More work for Wi-Fi 7 support, comprising conversion of all the
existing drivers to internal TX queue usage
- IPSec: introduce a new offload type (packet offload) allowing
complete header processing and crypto offloading
- IPSec: extended ack support for more descriptive XFRM error
reporting
- RXRPC: increase SACK table size and move processing into a
per-local endpoint kernel thread, considerably reducing the
required locking
- IEEE 802.15.4: synchronous frame send and extended filtering support,
initial support for scanning available 15.4 networks
- Tun: bump the link speed from 10Mbps to 10Gbps
- Tun/VirtioNet: implement UDP segmentation offload support
Driver API:
- PHY/SFP: improve power level switching between standard level 1 and
the higher power levels
- New API for netdev <-> devlink_port linkage
- PTP: convert existing drivers to new frequency adjustment
implementation
- DSA: add support for RX offloading
- Autoload DSA tagging driver when dynamically changing protocol
- Add new PCP and APPTRUST attributes to Data Center Bridging
- Add configuration support for 800Gbps link speed
- Add devlink port function attributes to enable/disable RoCE and
the migratable capability
- Extend devlink-rate to support strict priority and weighted fair
queuing
- Add devlink support for directly reading from region memory
- New device tree helper to fetch MAC address from nvmem
- New big TCP helper to simplify temporary header stripping
New hardware / drivers:
- Ethernet:
- Marvell Octeon CNF95N and CN10KB Ethernet Switches
- Marvell Prestera AC5X Ethernet Switch
- WangXun 10 Gigabit NIC
- Motorcomm yt8521 Gigabit Ethernet
- Microchip ksz9563 Gigabit Ethernet Switch
- Microsoft Azure Network Adapter
- Linux Automation 10Base-T1L adapter
- PHY:
- Aquantia AQR112 and AQR412
- Motorcomm YT8531S
- PTP:
- Orolia ART-CARD
- WiFi:
- MediaTek Wi-Fi 7 (802.11be) devices
- RealTek rtw8821cu, rtw8822bu, rtw8822cu and rtw8723du USB
devices
- Bluetooth:
- Broadcom BCM4377/4378/4387 Bluetooth chipsets
- Realtek RTL8852BE and RTL8723DS
- Cypress CYW4373A0 WiFi + Bluetooth combo device
Drivers:
- CAN:
- gs_usb: bus error reporting support
- kvaser_usb: listen-only and bus error reporting support
- Ethernet NICs:
- Intel (100G):
- extend action skbedit to RX queue mapping
- implement devlink-rate support
- support direct read from memory
- nVidia/Mellanox (mlx5):
- SW steering improvements, increasing rules update rate
- Support for enhanced events compression
- extend H/W offload packet manipulation capabilities
- implement IPSec packet offload mode
- nVidia/Mellanox (mlx4):
- better big TCP support
- Netronome Ethernet NICs (nfp):
- IPsec offload support
- add support for multicast filter
- Broadcom:
- RSS and PTP support improvements
- AMD/SolarFlare:
- netlink extended ack improvements
- add basic flower matches to offload, and related stats
- Virtual NICs:
- ibmvnic: introduce affinity hint support
- small / embedded:
- Freescale fec: add initial XDP support
- Marvell mv643xx_eth: support MII/GMII/RGMII modes for Kirkwood
- TI am65-cpsw: add suspend/resume support
- MediaTek MT7986: add RX wireless ethernet dispatch support
- Realtek 8169: enable GRO software interrupt coalescing by
default
- Ethernet high-speed switches:
- Microchip (sparx5):
- add support for Sparx5 TC/flower H/W offload via VCAP
- Mellanox mlxsw:
- add 802.1X and MAC Authentication Bypass offload support
- add ip6gre support
- Embedded Ethernet switches:
- Mediatek (mtk_eth_soc):
- improve PCS implementation, add DSA untag support
- enable flow offload support
- Renesas:
- add rswitch R-Car Gen4 gPTP support
- Microchip (lan966x):
- add full XDP support
- add TC H/W offload via VCAP
- enable PTP on bridge interfaces
- Microchip (ksz8):
- add MTU support for KSZ8 series
- Qualcomm 802.11ax WiFi (ath11k):
- support configuring channel dwell time during scan
- MediaTek WiFi (mt76):
- enable Wireless Ethernet Dispatch (WED) offload support
- add ack signal support
- enable coredump support
- remain_on_channel support
- Intel WiFi (iwlwifi):
- enable Wi-Fi 7 Extremely High Throughput (EHT) PHY capabilities
- 320 MHz channels support
- RealTek WiFi (rtw89):
- new dynamic header firmware format support
- wake-over-WLAN support"
* tag 'net-next-6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2002 commits)
ipvs: fix type warning in do_div() on 32 bit
net: lan966x: Remove a useless test in lan966x_ptp_add_trap()
net: ipa: add IPA v4.7 support
dt-bindings: net: qcom,ipa: Add SM6350 compatible
bnxt: Use generic HBH removal helper in tx path
IPv6/GRO: generic helper to remove temporary HBH/jumbo header in driver
selftests: forwarding: Add bridge MDB test
selftests: forwarding: Rename bridge_mdb test
bridge: mcast: Support replacement of MDB port group entries
bridge: mcast: Allow user space to specify MDB entry routing protocol
bridge: mcast: Allow user space to add (*, G) with a source list and filter mode
bridge: mcast: Add support for (*, G) with a source list and filter mode
bridge: mcast: Avoid arming group timer when (S, G) corresponds to a source
bridge: mcast: Add a flag for user installed source entries
bridge: mcast: Expose __br_multicast_del_group_src()
bridge: mcast: Expose br_multicast_new_group_src()
bridge: mcast: Add a centralized error path
bridge: mcast: Place netlink policy before validation functions
bridge: mcast: Split (*, G) and (S, G) addition into different functions
bridge: mcast: Do not derive entry type from its filter mode
...
Diffstat (limited to 'drivers/net/ethernet/netronome')
25 files changed, 1124 insertions, 97 deletions
diff --git a/drivers/net/ethernet/netronome/Kconfig b/drivers/net/ethernet/netronome/Kconfig index 8844d1ac053a..e785c00b5845 100644 --- a/drivers/net/ethernet/netronome/Kconfig +++ b/drivers/net/ethernet/netronome/Kconfig @@ -54,6 +54,17 @@ config NFP_APP_ABM_NIC functionality. Code will be built into the nfp.ko driver. +config NFP_NET_IPSEC + bool "NFP IPsec crypto offload support" + depends on NFP + depends on XFRM_OFFLOAD + default y + help + Enable driver support IPsec crypto offload on NFP NIC. + Say Y, if you are planning to make use of IPsec crypto + offload. NOTE that IPsec crypto offload on NFP NIC + requires specific FW to work. + config NFP_DEBUG bool "Debug support for Netronome(R) NFP4000/NFP6000 NIC drivers" depends on NFP diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile index 9c0861d03634..8a250214e289 100644 --- a/drivers/net/ethernet/netronome/nfp/Makefile +++ b/drivers/net/ethernet/netronome/nfp/Makefile @@ -80,4 +80,6 @@ nfp-objs += \ abm/main.o endif +nfp-$(CONFIG_NFP_NET_IPSEC) += crypto/ipsec.o nfd3/ipsec.o + nfp-$(CONFIG_NFP_DEBUG) += nfp_net_debugfs.o diff --git a/drivers/net/ethernet/netronome/nfp/ccm_mbox.c b/drivers/net/ethernet/netronome/nfp/ccm_mbox.c index 4247bca09807..aa8aba4ff7aa 100644 --- a/drivers/net/ethernet/netronome/nfp/ccm_mbox.c +++ b/drivers/net/ethernet/netronome/nfp/ccm_mbox.c @@ -503,7 +503,7 @@ nfp_ccm_mbox_msg_prepare(struct nfp_net *nn, struct sk_buff *skb, max_len = max(max_reply_size, round_up(skb->len, 4)); if (max_len > mbox_max) { nn_dp_warn(&nn->dp, - "message too big for tha mailbox: %u/%u vs %u\n", + "message too big for the mailbox: %u/%u vs %u\n", skb->len, max_reply_size, mbox_max); return -EMSGSIZE; } diff --git a/drivers/net/ethernet/netronome/nfp/crypto/crypto.h b/drivers/net/ethernet/netronome/nfp/crypto/crypto.h index bffe58bb2f27..1df73d658938 100644 --- a/drivers/net/ethernet/netronome/nfp/crypto/crypto.h +++ b/drivers/net/ethernet/netronome/nfp/crypto/crypto.h @@ -39,4 +39,27 @@ nfp_net_tls_rx_resync_req(struct net_device *netdev, } #endif +/* IPsec related structures and functions */ +struct nfp_ipsec_offload { + u32 seq_hi; + u32 seq_low; + u32 handle; +}; + +#ifndef CONFIG_NFP_NET_IPSEC +static inline void nfp_net_ipsec_init(struct nfp_net *nn) +{ +} + +static inline void nfp_net_ipsec_clean(struct nfp_net *nn) +{ +} +#else +void nfp_net_ipsec_init(struct nfp_net *nn); +void nfp_net_ipsec_clean(struct nfp_net *nn); +bool nfp_net_ipsec_tx_prep(struct nfp_net_dp *dp, struct sk_buff *skb, + struct nfp_ipsec_offload *offload_info); +int nfp_net_ipsec_rx(struct nfp_meta_parsed *meta, struct sk_buff *skb); +#endif + #endif diff --git a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c new file mode 100644 index 000000000000..4632268695cb --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c @@ -0,0 +1,592 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2018 Netronome Systems, Inc */ +/* Copyright (C) 2021 Corigine, Inc */ + +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/init.h> +#include <linux/netdevice.h> +#include <asm/unaligned.h> +#include <linux/ktime.h> +#include <net/xfrm.h> + +#include "../nfp_net_ctrl.h" +#include "../nfp_net.h" +#include "crypto.h" + +#define NFP_NET_IPSEC_MAX_SA_CNT (16 * 1024) /* Firmware support a maximum of 16K SA offload */ + +/* IPsec config message cmd codes */ +enum nfp_ipsec_cfg_mssg_cmd_codes { + 
NFP_IPSEC_CFG_MSSG_ADD_SA, /* Add a new SA */ + NFP_IPSEC_CFG_MSSG_INV_SA /* Invalidate an existing SA */ +}; + +/* IPsec config message response codes */ +enum nfp_ipsec_cfg_mssg_rsp_codes { + NFP_IPSEC_CFG_MSSG_OK, + NFP_IPSEC_CFG_MSSG_FAILED, + NFP_IPSEC_CFG_MSSG_SA_VALID, + NFP_IPSEC_CFG_MSSG_SA_HASH_ADD_FAILED, + NFP_IPSEC_CFG_MSSG_SA_HASH_DEL_FAILED, + NFP_IPSEC_CFG_MSSG_SA_INVALID_CMD +}; + +/* Protocol */ +enum nfp_ipsec_sa_prot { + NFP_IPSEC_PROTOCOL_AH = 0, + NFP_IPSEC_PROTOCOL_ESP = 1 +}; + +/* Mode */ +enum nfp_ipsec_sa_mode { + NFP_IPSEC_PROTMODE_TRANSPORT = 0, + NFP_IPSEC_PROTMODE_TUNNEL = 1 +}; + +/* Cipher types */ +enum nfp_ipsec_sa_cipher { + NFP_IPSEC_CIPHER_NULL, + NFP_IPSEC_CIPHER_3DES, + NFP_IPSEC_CIPHER_AES128, + NFP_IPSEC_CIPHER_AES192, + NFP_IPSEC_CIPHER_AES256, + NFP_IPSEC_CIPHER_AES128_NULL, + NFP_IPSEC_CIPHER_AES192_NULL, + NFP_IPSEC_CIPHER_AES256_NULL, + NFP_IPSEC_CIPHER_CHACHA20 +}; + +/* Cipher modes */ +enum nfp_ipsec_sa_cipher_mode { + NFP_IPSEC_CIMODE_ECB, + NFP_IPSEC_CIMODE_CBC, + NFP_IPSEC_CIMODE_CFB, + NFP_IPSEC_CIMODE_OFB, + NFP_IPSEC_CIMODE_CTR +}; + +/* Hash types */ +enum nfp_ipsec_sa_hash_type { + NFP_IPSEC_HASH_NONE, + NFP_IPSEC_HASH_MD5_96, + NFP_IPSEC_HASH_SHA1_96, + NFP_IPSEC_HASH_SHA256_96, + NFP_IPSEC_HASH_SHA384_96, + NFP_IPSEC_HASH_SHA512_96, + NFP_IPSEC_HASH_MD5_128, + NFP_IPSEC_HASH_SHA1_80, + NFP_IPSEC_HASH_SHA256_128, + NFP_IPSEC_HASH_SHA384_192, + NFP_IPSEC_HASH_SHA512_256, + NFP_IPSEC_HASH_GF128_128, + NFP_IPSEC_HASH_POLY1305_128 +}; + +/* IPSEC_CFG_MSSG_ADD_SA */ +struct nfp_ipsec_cfg_add_sa { + u32 ciph_key[8]; /* Cipher Key */ + union { + u32 auth_key[16]; /* Authentication Key */ + struct nfp_ipsec_aesgcm { /* AES-GCM-ESP fields */ + u32 salt; /* Initialized with SA */ + u32 resv[15]; + } aesgcm_fields; + }; + struct sa_ctrl_word { + uint32_t hash :4; /* From nfp_ipsec_sa_hash_type */ + uint32_t cimode :4; /* From nfp_ipsec_sa_cipher_mode */ + uint32_t cipher :4; /* From nfp_ipsec_sa_cipher */ + uint32_t mode :2; /* From nfp_ipsec_sa_mode */ + uint32_t proto :2; /* From nfp_ipsec_sa_prot */ + uint32_t dir :1; /* SA direction */ + uint32_t resv0 :12; + uint32_t encap_dsbl:1; /* Encap/Decap disable */ + uint32_t resv1 :2; /* Must be set to 0 */ + } ctrl_word; + u32 spi; /* SPI Value */ + uint32_t pmtu_limit :16; /* PMTU Limit */ + uint32_t resv0 :5; + uint32_t ipv6 :1; /* Outbound IPv6 addr format */ + uint32_t resv1 :10; + u32 resv2[2]; + u32 src_ip[4]; /* Src IP addr */ + u32 dst_ip[4]; /* Dst IP addr */ + u32 resv3[6]; +}; + +/* IPSEC_CFG_MSSG */ +struct nfp_ipsec_cfg_mssg { + union { + struct{ + uint32_t cmd:16; /* One of nfp_ipsec_cfg_mssg_cmd_codes */ + uint32_t rsp:16; /* One of nfp_ipsec_cfg_mssg_rsp_codes */ + uint32_t sa_idx:16; /* SA table index */ + uint32_t spare0:16; + struct nfp_ipsec_cfg_add_sa cfg_add_sa; + }; + u32 raw[64]; + }; +}; + +static int nfp_ipsec_cfg_cmd_issue(struct nfp_net *nn, int type, int saidx, + struct nfp_ipsec_cfg_mssg *msg) +{ + int i, msg_size, ret; + + msg->cmd = type; + msg->sa_idx = saidx; + msg->rsp = 0; + msg_size = ARRAY_SIZE(msg->raw); + + for (i = 0; i < msg_size; i++) + nn_writel(nn, NFP_NET_CFG_MBOX_VAL + 4 * i, msg->raw[i]); + + ret = nfp_net_mbox_reconfig(nn, NFP_NET_CFG_MBOX_CMD_IPSEC); + if (ret < 0) + return ret; + + /* For now we always read the whole message response back */ + for (i = 0; i < msg_size; i++) + msg->raw[i] = nn_readl(nn, NFP_NET_CFG_MBOX_VAL + 4 * i); + + switch (msg->rsp) { + case NFP_IPSEC_CFG_MSSG_OK: + return 0; + case NFP_IPSEC_CFG_MSSG_SA_INVALID_CMD: + 
return -EINVAL; + case NFP_IPSEC_CFG_MSSG_SA_VALID: + return -EEXIST; + case NFP_IPSEC_CFG_MSSG_FAILED: + case NFP_IPSEC_CFG_MSSG_SA_HASH_ADD_FAILED: + case NFP_IPSEC_CFG_MSSG_SA_HASH_DEL_FAILED: + return -EIO; + default: + return -EINVAL; + } +} + +static int set_aes_keylen(struct nfp_ipsec_cfg_add_sa *cfg, int alg, int keylen) +{ + bool aes_gmac = (alg == SADB_X_EALG_NULL_AES_GMAC); + + switch (keylen) { + case 128: + cfg->ctrl_word.cipher = aes_gmac ? NFP_IPSEC_CIPHER_AES128_NULL : + NFP_IPSEC_CIPHER_AES128; + break; + case 192: + cfg->ctrl_word.cipher = aes_gmac ? NFP_IPSEC_CIPHER_AES192_NULL : + NFP_IPSEC_CIPHER_AES192; + break; + case 256: + cfg->ctrl_word.cipher = aes_gmac ? NFP_IPSEC_CIPHER_AES256_NULL : + NFP_IPSEC_CIPHER_AES256; + break; + default: + return -EINVAL; + } + + return 0; +} + +static void set_md5hmac(struct nfp_ipsec_cfg_add_sa *cfg, int *trunc_len) +{ + switch (*trunc_len) { + case 96: + cfg->ctrl_word.hash = NFP_IPSEC_HASH_MD5_96; + break; + case 128: + cfg->ctrl_word.hash = NFP_IPSEC_HASH_MD5_128; + break; + default: + *trunc_len = 0; + } +} + +static void set_sha1hmac(struct nfp_ipsec_cfg_add_sa *cfg, int *trunc_len) +{ + switch (*trunc_len) { + case 96: + cfg->ctrl_word.hash = NFP_IPSEC_HASH_SHA1_96; + break; + case 80: + cfg->ctrl_word.hash = NFP_IPSEC_HASH_SHA1_80; + break; + default: + *trunc_len = 0; + } +} + +static void set_sha2_256hmac(struct nfp_ipsec_cfg_add_sa *cfg, int *trunc_len) +{ + switch (*trunc_len) { + case 96: + cfg->ctrl_word.hash = NFP_IPSEC_HASH_SHA256_96; + break; + case 128: + cfg->ctrl_word.hash = NFP_IPSEC_HASH_SHA256_128; + break; + default: + *trunc_len = 0; + } +} + +static void set_sha2_384hmac(struct nfp_ipsec_cfg_add_sa *cfg, int *trunc_len) +{ + switch (*trunc_len) { + case 96: + cfg->ctrl_word.hash = NFP_IPSEC_HASH_SHA384_96; + break; + case 192: + cfg->ctrl_word.hash = NFP_IPSEC_HASH_SHA384_192; + break; + default: + *trunc_len = 0; + } +} + +static void set_sha2_512hmac(struct nfp_ipsec_cfg_add_sa *cfg, int *trunc_len) +{ + switch (*trunc_len) { + case 96: + cfg->ctrl_word.hash = NFP_IPSEC_HASH_SHA512_96; + break; + case 256: + cfg->ctrl_word.hash = NFP_IPSEC_HASH_SHA512_256; + break; + default: + *trunc_len = 0; + } +} + +static int nfp_net_xfrm_add_state(struct xfrm_state *x) +{ + struct net_device *netdev = x->xso.dev; + struct nfp_ipsec_cfg_mssg msg = {}; + int i, key_len, trunc_len, err = 0; + struct nfp_ipsec_cfg_add_sa *cfg; + struct nfp_net *nn; + unsigned int saidx; + + nn = netdev_priv(netdev); + cfg = &msg.cfg_add_sa; + + /* General */ + switch (x->props.mode) { + case XFRM_MODE_TUNNEL: + cfg->ctrl_word.mode = NFP_IPSEC_PROTMODE_TUNNEL; + break; + case XFRM_MODE_TRANSPORT: + cfg->ctrl_word.mode = NFP_IPSEC_PROTMODE_TRANSPORT; + break; + default: + nn_err(nn, "Unsupported mode for xfrm offload\n"); + return -EINVAL; + } + + switch (x->id.proto) { + case IPPROTO_ESP: + cfg->ctrl_word.proto = NFP_IPSEC_PROTOCOL_ESP; + break; + case IPPROTO_AH: + cfg->ctrl_word.proto = NFP_IPSEC_PROTOCOL_AH; + break; + default: + nn_err(nn, "Unsupported protocol for xfrm offload\n"); + return -EINVAL; + } + + if (x->props.flags & XFRM_STATE_ESN) { + nn_err(nn, "Unsupported XFRM_REPLAY_MODE_ESN for xfrm offload\n"); + return -EINVAL; + } + + if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) { + nn_err(nn, "Unsupported xfrm offload tyoe\n"); + return -EINVAL; + } + + cfg->spi = ntohl(x->id.spi); + + /* Hash/Authentication */ + if (x->aalg) + trunc_len = x->aalg->alg_trunc_len; + else + trunc_len = 0; + + switch (x->props.aalgo) { + case 
SADB_AALG_NONE: + if (x->aead) { + trunc_len = -1; + } else { + nn_err(nn, "Unsupported authentication algorithm\n"); + return -EINVAL; + } + break; + case SADB_X_AALG_NULL: + cfg->ctrl_word.hash = NFP_IPSEC_HASH_NONE; + trunc_len = -1; + break; + case SADB_AALG_MD5HMAC: + set_md5hmac(cfg, &trunc_len); + break; + case SADB_AALG_SHA1HMAC: + set_sha1hmac(cfg, &trunc_len); + break; + case SADB_X_AALG_SHA2_256HMAC: + set_sha2_256hmac(cfg, &trunc_len); + break; + case SADB_X_AALG_SHA2_384HMAC: + set_sha2_384hmac(cfg, &trunc_len); + break; + case SADB_X_AALG_SHA2_512HMAC: + set_sha2_512hmac(cfg, &trunc_len); + break; + default: + nn_err(nn, "Unsupported authentication algorithm\n"); + return -EINVAL; + } + + if (!trunc_len) { + nn_err(nn, "Unsupported authentication algorithm trunc length\n"); + return -EINVAL; + } + + if (x->aalg) { + key_len = DIV_ROUND_UP(x->aalg->alg_key_len, BITS_PER_BYTE); + if (key_len > sizeof(cfg->auth_key)) { + nn_err(nn, "Insufficient space for offloaded auth key\n"); + return -EINVAL; + } + for (i = 0; i < key_len / sizeof(cfg->auth_key[0]) ; i++) + cfg->auth_key[i] = get_unaligned_be32(x->aalg->alg_key + + sizeof(cfg->auth_key[0]) * i); + } + + /* Encryption */ + switch (x->props.ealgo) { + case SADB_EALG_NONE: + case SADB_EALG_NULL: + cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CBC; + cfg->ctrl_word.cipher = NFP_IPSEC_CIPHER_NULL; + break; + case SADB_EALG_3DESCBC: + cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CBC; + cfg->ctrl_word.cipher = NFP_IPSEC_CIPHER_3DES; + break; + case SADB_X_EALG_AES_GCM_ICV16: + case SADB_X_EALG_NULL_AES_GMAC: + if (!x->aead) { + nn_err(nn, "Invalid AES key data\n"); + return -EINVAL; + } + + if (x->aead->alg_icv_len != 128) { + nn_err(nn, "ICV must be 128bit with SADB_X_EALG_AES_GCM_ICV16\n"); + return -EINVAL; + } + cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CTR; + cfg->ctrl_word.hash = NFP_IPSEC_HASH_GF128_128; + + /* Aead->alg_key_len includes 32-bit salt */ + if (set_aes_keylen(cfg, x->props.ealgo, x->aead->alg_key_len - 32)) { + nn_err(nn, "Unsupported AES key length %d\n", x->aead->alg_key_len); + return -EINVAL; + } + break; + case SADB_X_EALG_AESCBC: + cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CBC; + if (!x->ealg) { + nn_err(nn, "Invalid AES key data\n"); + return -EINVAL; + } + if (set_aes_keylen(cfg, x->props.ealgo, x->ealg->alg_key_len) < 0) { + nn_err(nn, "Unsupported AES key length %d\n", x->ealg->alg_key_len); + return -EINVAL; + } + break; + default: + nn_err(nn, "Unsupported encryption algorithm for offload\n"); + return -EINVAL; + } + + if (x->aead) { + int salt_len = 4; + + key_len = DIV_ROUND_UP(x->aead->alg_key_len, BITS_PER_BYTE); + key_len -= salt_len; + + if (key_len > sizeof(cfg->ciph_key)) { + nn_err(nn, "aead: Insufficient space for offloaded key\n"); + return -EINVAL; + } + + for (i = 0; i < key_len / sizeof(cfg->ciph_key[0]) ; i++) + cfg->ciph_key[i] = get_unaligned_be32(x->aead->alg_key + + sizeof(cfg->ciph_key[0]) * i); + + /* Load up the salt */ + cfg->aesgcm_fields.salt = get_unaligned_be32(x->aead->alg_key + key_len); + } + + if (x->ealg) { + key_len = DIV_ROUND_UP(x->ealg->alg_key_len, BITS_PER_BYTE); + + if (key_len > sizeof(cfg->ciph_key)) { + nn_err(nn, "ealg: Insufficient space for offloaded key\n"); + return -EINVAL; + } + for (i = 0; i < key_len / sizeof(cfg->ciph_key[0]) ; i++) + cfg->ciph_key[i] = get_unaligned_be32(x->ealg->alg_key + + sizeof(cfg->ciph_key[0]) * i); + } + + /* IP related info */ + switch (x->props.family) { + case AF_INET: + cfg->ipv6 = 0; + cfg->src_ip[0] = ntohl(x->props.saddr.a4); + 
cfg->dst_ip[0] = ntohl(x->id.daddr.a4); + break; + case AF_INET6: + cfg->ipv6 = 1; + for (i = 0; i < 4; i++) { + cfg->src_ip[i] = ntohl(x->props.saddr.a6[i]); + cfg->dst_ip[i] = ntohl(x->id.daddr.a6[i]); + } + break; + default: + nn_err(nn, "Unsupported address family\n"); + return -EINVAL; + } + + /* Maximum nic IPsec code could handle. Other limits may apply. */ + cfg->pmtu_limit = 0xffff; + cfg->ctrl_word.encap_dsbl = 1; + + /* SA direction */ + cfg->ctrl_word.dir = x->xso.dir; + + /* Find unused SA data*/ + err = xa_alloc(&nn->xa_ipsec, &saidx, x, + XA_LIMIT(0, NFP_NET_IPSEC_MAX_SA_CNT - 1), GFP_KERNEL); + if (err < 0) { + nn_err(nn, "Unable to get sa_data number for IPsec\n"); + return err; + } + + /* Allocate saidx and commit the SA */ + err = nfp_ipsec_cfg_cmd_issue(nn, NFP_IPSEC_CFG_MSSG_ADD_SA, saidx, &msg); + if (err) { + xa_erase(&nn->xa_ipsec, saidx); + nn_err(nn, "Failed to issue IPsec command err ret=%d\n", err); + return err; + } + + /* 0 is invalid offload_handle for kernel */ + x->xso.offload_handle = saidx + 1; + return 0; +} + +static void nfp_net_xfrm_del_state(struct xfrm_state *x) +{ + struct net_device *netdev = x->xso.dev; + struct nfp_ipsec_cfg_mssg msg; + struct nfp_net *nn; + int err; + + nn = netdev_priv(netdev); + err = nfp_ipsec_cfg_cmd_issue(nn, NFP_IPSEC_CFG_MSSG_INV_SA, + x->xso.offload_handle - 1, &msg); + if (err) + nn_warn(nn, "Failed to invalidate SA in hardware\n"); + + xa_erase(&nn->xa_ipsec, x->xso.offload_handle - 1); +} + +static bool nfp_net_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *x) +{ + if (x->props.family == AF_INET) + /* Offload with IPv4 options is not supported yet */ + return ip_hdr(skb)->ihl == 5; + + /* Offload with IPv6 extension headers is not support yet */ + return !(ipv6_ext_hdr(ipv6_hdr(skb)->nexthdr)); +} + +static const struct xfrmdev_ops nfp_net_ipsec_xfrmdev_ops = { + .xdo_dev_state_add = nfp_net_xfrm_add_state, + .xdo_dev_state_delete = nfp_net_xfrm_del_state, + .xdo_dev_offload_ok = nfp_net_ipsec_offload_ok, +}; + +void nfp_net_ipsec_init(struct nfp_net *nn) +{ + if (!(nn->cap_w1 & NFP_NET_CFG_CTRL_IPSEC)) + return; + + xa_init_flags(&nn->xa_ipsec, XA_FLAGS_ALLOC); + nn->dp.netdev->xfrmdev_ops = &nfp_net_ipsec_xfrmdev_ops; +} + +void nfp_net_ipsec_clean(struct nfp_net *nn) +{ + if (!(nn->cap_w1 & NFP_NET_CFG_CTRL_IPSEC)) + return; + + WARN_ON(!xa_empty(&nn->xa_ipsec)); + xa_destroy(&nn->xa_ipsec); +} + +bool nfp_net_ipsec_tx_prep(struct nfp_net_dp *dp, struct sk_buff *skb, + struct nfp_ipsec_offload *offload_info) +{ + struct xfrm_offload *xo = xfrm_offload(skb); + struct xfrm_state *x; + + x = xfrm_input_state(skb); + if (!x) + return false; + + offload_info->seq_hi = xo->seq.hi; + offload_info->seq_low = xo->seq.low; + offload_info->handle = x->xso.offload_handle; + + return true; +} + +int nfp_net_ipsec_rx(struct nfp_meta_parsed *meta, struct sk_buff *skb) +{ + struct net_device *netdev = skb->dev; + struct xfrm_offload *xo; + struct xfrm_state *x; + struct sec_path *sp; + struct nfp_net *nn; + u32 saidx; + + nn = netdev_priv(netdev); + + saidx = meta->ipsec_saidx - 1; + if (saidx >= NFP_NET_IPSEC_MAX_SA_CNT) + return -EINVAL; + + sp = secpath_set(skb); + if (unlikely(!sp)) + return -ENOMEM; + + xa_lock(&nn->xa_ipsec); + x = xa_load(&nn->xa_ipsec, saidx); + xa_unlock(&nn->xa_ipsec); + if (!x) + return -EINVAL; + + xfrm_state_hold(x); + sp->xvec[sp->len++] = x; + sp->olen++; + xo = xfrm_offload(skb); + xo->flags = CRYPTO_DONE; + xo->status = CRYPTO_SUCCESS; + + return 0; +} diff --git 
a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c index e92860e20a24..88d6d992e7d0 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c +++ b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c @@ -154,10 +154,11 @@ nfp_fl_lag_find_group_for_master_with_lag(struct nfp_fl_lag *lag, return NULL; } -int nfp_flower_lag_populate_pre_action(struct nfp_app *app, - struct net_device *master, - struct nfp_fl_pre_lag *pre_act, - struct netlink_ext_ack *extack) +static int nfp_fl_lag_get_group_info(struct nfp_app *app, + struct net_device *netdev, + __be16 *group_id, + u8 *batch_ver, + u8 *group_inst) { struct nfp_flower_priv *priv = app->priv; struct nfp_fl_lag_group *group = NULL; @@ -165,23 +166,52 @@ int nfp_flower_lag_populate_pre_action(struct nfp_app *app, mutex_lock(&priv->nfp_lag.lock); group = nfp_fl_lag_find_group_for_master_with_lag(&priv->nfp_lag, - master); + netdev); if (!group) { mutex_unlock(&priv->nfp_lag.lock); - NL_SET_ERR_MSG_MOD(extack, "invalid entry: group does not exist for LAG action"); return -ENOENT; } - pre_act->group_id = cpu_to_be16(group->group_id); - temp_vers = cpu_to_be32(priv->nfp_lag.batch_ver << - NFP_FL_PRE_LAG_VER_OFF); - memcpy(pre_act->lag_version, &temp_vers, 3); - pre_act->instance = group->group_inst; + if (group_id) + *group_id = cpu_to_be16(group->group_id); + + if (batch_ver) { + temp_vers = cpu_to_be32(priv->nfp_lag.batch_ver << + NFP_FL_PRE_LAG_VER_OFF); + memcpy(batch_ver, &temp_vers, 3); + } + + if (group_inst) + *group_inst = group->group_inst; + mutex_unlock(&priv->nfp_lag.lock); return 0; } +int nfp_flower_lag_populate_pre_action(struct nfp_app *app, + struct net_device *master, + struct nfp_fl_pre_lag *pre_act, + struct netlink_ext_ack *extack) +{ + if (nfp_fl_lag_get_group_info(app, master, &pre_act->group_id, + pre_act->lag_version, + &pre_act->instance)) { + NL_SET_ERR_MSG_MOD(extack, "invalid entry: group does not exist for LAG action"); + return -ENOENT; + } + + return 0; +} + +void nfp_flower_lag_get_info_from_netdev(struct nfp_app *app, + struct net_device *netdev, + struct nfp_tun_neigh_lag *lag) +{ + nfp_fl_lag_get_group_info(app, netdev, NULL, + lag->lag_version, &lag->lag_instance); +} + int nfp_flower_lag_get_output_id(struct nfp_app *app, struct net_device *master) { struct nfp_flower_priv *priv = app->priv; diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.c b/drivers/net/ethernet/netronome/nfp/flower/main.c index 4d960a9641b3..83eaa5ae3cd4 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/main.c +++ b/drivers/net/ethernet/netronome/nfp/flower/main.c @@ -76,7 +76,9 @@ nfp_flower_get_internal_port_id(struct nfp_app *app, struct net_device *netdev) u32 nfp_flower_get_port_id_from_netdev(struct nfp_app *app, struct net_device *netdev) { + struct nfp_flower_priv *priv = app->priv; int ext_port; + int gid; if (nfp_netdev_is_nfp_repr(netdev)) { return nfp_repr_get_port_id(netdev); @@ -86,6 +88,13 @@ u32 nfp_flower_get_port_id_from_netdev(struct nfp_app *app, return 0; return nfp_flower_internal_port_get_port_id(ext_port); + } else if (netif_is_lag_master(netdev) && + priv->flower_ext_feats & NFP_FL_FEATS_TUNNEL_NEIGH_LAG) { + gid = nfp_flower_lag_get_output_id(app, netdev); + if (gid < 0) + return 0; + + return (NFP_FL_LAG_OUT | gid); } return 0; diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h index cb799d18682d..40372545148e 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/main.h 
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h @@ -52,6 +52,7 @@ struct nfp_app; #define NFP_FL_FEATS_QOS_PPS BIT(9) #define NFP_FL_FEATS_QOS_METER BIT(10) #define NFP_FL_FEATS_DECAP_V2 BIT(11) +#define NFP_FL_FEATS_TUNNEL_NEIGH_LAG BIT(12) #define NFP_FL_FEATS_HOST_ACK BIT(31) #define NFP_FL_ENABLE_FLOW_MERGE BIT(0) @@ -69,7 +70,8 @@ struct nfp_app; NFP_FL_FEATS_VLAN_QINQ | \ NFP_FL_FEATS_QOS_PPS | \ NFP_FL_FEATS_QOS_METER | \ - NFP_FL_FEATS_DECAP_V2) + NFP_FL_FEATS_DECAP_V2 | \ + NFP_FL_FEATS_TUNNEL_NEIGH_LAG) struct nfp_fl_mask_id { struct circ_buf mask_id_free_list; @@ -104,6 +106,16 @@ struct nfp_fl_tunnel_offloads { }; /** + * struct nfp_tun_neigh_lag - lag info + * @lag_version: lag version + * @lag_instance: lag instance + */ +struct nfp_tun_neigh_lag { + u8 lag_version[3]; + u8 lag_instance; +}; + +/** * struct nfp_tun_neigh - basic neighbour data * @dst_addr: Destination MAC address * @src_addr: Source MAC address @@ -133,12 +145,14 @@ struct nfp_tun_neigh_ext { * @src_ipv4: Source IPv4 address * @common: Neighbour/route common info * @ext: Neighbour/route extended info + * @lag: lag port info */ struct nfp_tun_neigh_v4 { __be32 dst_ipv4; __be32 src_ipv4; struct nfp_tun_neigh common; struct nfp_tun_neigh_ext ext; + struct nfp_tun_neigh_lag lag; }; /** @@ -147,12 +161,14 @@ struct nfp_tun_neigh_v4 { * @src_ipv6: Source IPv6 address * @common: Neighbour/route common info * @ext: Neighbour/route extended info + * @lag: lag port info */ struct nfp_tun_neigh_v6 { struct in6_addr dst_ipv6; struct in6_addr src_ipv6; struct nfp_tun_neigh common; struct nfp_tun_neigh_ext ext; + struct nfp_tun_neigh_lag lag; }; /** @@ -647,6 +663,9 @@ int nfp_flower_lag_populate_pre_action(struct nfp_app *app, struct netlink_ext_ack *extack); int nfp_flower_lag_get_output_id(struct nfp_app *app, struct net_device *master); +void nfp_flower_lag_get_info_from_netdev(struct nfp_app *app, + struct net_device *netdev, + struct nfp_tun_neigh_lag *lag); void nfp_flower_qos_init(struct nfp_app *app); void nfp_flower_qos_cleanup(struct nfp_app *app); int nfp_flower_setup_qos_offload(struct nfp_app *app, struct net_device *netdev, diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c index 52f67157bd0f..a8678d5612ee 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c +++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c @@ -290,6 +290,11 @@ nfp_flower_xmit_tun_conf(struct nfp_app *app, u8 mtype, u16 plen, void *pdata, mtype == NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6)) plen -= sizeof(struct nfp_tun_neigh_ext); + if (!(priv->flower_ext_feats & NFP_FL_FEATS_TUNNEL_NEIGH_LAG) && + (mtype == NFP_FLOWER_CMSG_TYPE_TUN_NEIGH || + mtype == NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6)) + plen -= sizeof(struct nfp_tun_neigh_lag); + skb = nfp_flower_cmsg_alloc(app, plen, mtype, flag); if (!skb) return -ENOMEM; @@ -468,6 +473,7 @@ nfp_tun_write_neigh(struct net_device *netdev, struct nfp_app *app, neigh_table_params); if (!nn_entry && !neigh_invalid) { struct nfp_tun_neigh_ext *ext; + struct nfp_tun_neigh_lag *lag; struct nfp_tun_neigh *common; nn_entry = kzalloc(sizeof(*nn_entry) + neigh_size, @@ -488,6 +494,7 @@ nfp_tun_write_neigh(struct net_device *netdev, struct nfp_app *app, payload->dst_ipv6 = flowi6->daddr; common = &payload->common; ext = &payload->ext; + lag = &payload->lag; mtype = NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6; } else { struct flowi4 *flowi4 = (struct flowi4 *)flow; @@ -498,6 +505,7 @@ nfp_tun_write_neigh(struct net_device 
*netdev, struct nfp_app *app, payload->dst_ipv4 = flowi4->daddr; common = &payload->common; ext = &payload->ext; + lag = &payload->lag; mtype = NFP_FLOWER_CMSG_TYPE_TUN_NEIGH; } ext->host_ctx = cpu_to_be32(U32_MAX); @@ -505,6 +513,9 @@ nfp_tun_write_neigh(struct net_device *netdev, struct nfp_app *app, ext->vlan_tci = cpu_to_be16(U16_MAX); ether_addr_copy(common->src_addr, netdev->dev_addr); neigh_ha_snapshot(common->dst_addr, neigh, netdev); + + if ((port_id & NFP_FL_LAG_OUT) == NFP_FL_LAG_OUT) + nfp_flower_lag_get_info_from_netdev(app, netdev, lag); common->port_id = cpu_to_be32(port_id); if (rhashtable_insert_fast(&priv->neigh_table, @@ -547,13 +558,38 @@ nfp_tun_write_neigh(struct net_device *netdev, struct nfp_app *app, if (nn_entry->flow) list_del(&nn_entry->list_head); kfree(nn_entry); - } else if (nn_entry && !neigh_invalid && override) { - mtype = is_ipv6 ? NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6 : - NFP_FLOWER_CMSG_TYPE_TUN_NEIGH; - nfp_tun_link_predt_entries(app, nn_entry); - nfp_flower_xmit_tun_conf(app, mtype, neigh_size, - nn_entry->payload, - GFP_ATOMIC); + } else if (nn_entry && !neigh_invalid) { + struct nfp_tun_neigh *common; + u8 dst_addr[ETH_ALEN]; + bool is_mac_change; + + if (is_ipv6) { + struct nfp_tun_neigh_v6 *payload; + + payload = (struct nfp_tun_neigh_v6 *)nn_entry->payload; + common = &payload->common; + mtype = NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6; + } else { + struct nfp_tun_neigh_v4 *payload; + + payload = (struct nfp_tun_neigh_v4 *)nn_entry->payload; + common = &payload->common; + mtype = NFP_FLOWER_CMSG_TYPE_TUN_NEIGH; + } + + ether_addr_copy(dst_addr, common->dst_addr); + neigh_ha_snapshot(common->dst_addr, neigh, netdev); + is_mac_change = !ether_addr_equal(dst_addr, common->dst_addr); + if (override || is_mac_change) { + if (is_mac_change && nn_entry->flow) { + list_del(&nn_entry->list_head); + nn_entry->flow = NULL; + } + nfp_tun_link_predt_entries(app, nn_entry); + nfp_flower_xmit_tun_conf(app, mtype, neigh_size, + nn_entry->payload, + GFP_ATOMIC); + } } spin_unlock_bh(&priv->predt_lock); @@ -593,8 +629,7 @@ nfp_tun_neigh_event_handler(struct notifier_block *nb, unsigned long event, app_priv = container_of(nb, struct nfp_flower_priv, tun.neigh_nb); app = app_priv->app; - if (!nfp_netdev_is_nfp_repr(n->dev) && - !nfp_flower_internal_port_can_offload(app, n->dev)) + if (!nfp_flower_get_port_id_from_netdev(app, n->dev)) return NOTIFY_DONE; #if IS_ENABLED(CONFIG_INET) diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c index 448c1c1afaee..861082c5dbff 100644 --- a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c +++ b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c @@ -4,6 +4,7 @@ #include <linux/bpf_trace.h> #include <linux/netdevice.h> #include <linux/bitfield.h> +#include <net/xfrm.h> #include "../nfp_app.h" #include "../nfp_net.h" @@ -167,28 +168,34 @@ nfp_nfd3_tx_csum(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, u64_stats_update_end(&r_vec->tx_sync); } -static int nfp_nfd3_prep_tx_meta(struct nfp_net_dp *dp, struct sk_buff *skb, u64 tls_handle) +static int nfp_nfd3_prep_tx_meta(struct nfp_net_dp *dp, struct sk_buff *skb, + u64 tls_handle, bool *ipsec) { struct metadata_dst *md_dst = skb_metadata_dst(skb); + struct nfp_ipsec_offload offload_info; unsigned char *data; bool vlan_insert; u32 meta_id = 0; int md_bytes; - if (unlikely(md_dst || tls_handle)) { - if (unlikely(md_dst && md_dst->type != METADATA_HW_PORT_MUX)) - md_dst = NULL; - } +#ifdef CONFIG_NFP_NET_IPSEC + if (xfrm_offload(skb)) + *ipsec = 
nfp_net_ipsec_tx_prep(dp, skb, &offload_info); +#endif + + if (unlikely(md_dst && md_dst->type != METADATA_HW_PORT_MUX)) + md_dst = NULL; vlan_insert = skb_vlan_tag_present(skb) && (dp->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2); - if (!(md_dst || tls_handle || vlan_insert)) + if (!(md_dst || tls_handle || vlan_insert || *ipsec)) return 0; md_bytes = sizeof(meta_id) + !!md_dst * NFP_NET_META_PORTID_SIZE + !!tls_handle * NFP_NET_META_CONN_HANDLE_SIZE + - vlan_insert * NFP_NET_META_VLAN_SIZE; + vlan_insert * NFP_NET_META_VLAN_SIZE + + *ipsec * NFP_NET_META_IPSEC_FIELD_SIZE; /* IPsec has 12 bytes of metadata */ if (unlikely(skb_cow_head(skb, md_bytes))) return -ENOMEM; @@ -218,6 +225,19 @@ static int nfp_nfd3_prep_tx_meta(struct nfp_net_dp *dp, struct sk_buff *skb, u64 meta_id <<= NFP_NET_META_FIELD_SIZE; meta_id |= NFP_NET_META_VLAN; } + if (*ipsec) { + /* IPsec has three consecutive 4-bit IPsec metadata types, + * so in total IPsec has three 4 bytes of metadata. + */ + data -= NFP_NET_META_IPSEC_SIZE; + put_unaligned_be32(offload_info.seq_hi, data); + data -= NFP_NET_META_IPSEC_SIZE; + put_unaligned_be32(offload_info.seq_low, data); + data -= NFP_NET_META_IPSEC_SIZE; + put_unaligned_be32(offload_info.handle - 1, data); + meta_id <<= NFP_NET_META_IPSEC_FIELD_SIZE; + meta_id |= NFP_NET_META_IPSEC << 8 | NFP_NET_META_IPSEC << 4 | NFP_NET_META_IPSEC; + } data -= sizeof(meta_id); put_unaligned_be32(meta_id, data); @@ -246,6 +266,7 @@ netdev_tx_t nfp_nfd3_tx(struct sk_buff *skb, struct net_device *netdev) dma_addr_t dma_addr; unsigned int fsize; u64 tls_handle = 0; + bool ipsec = false; u16 qidx; dp = &nn->dp; @@ -273,7 +294,7 @@ netdev_tx_t nfp_nfd3_tx(struct sk_buff *skb, struct net_device *netdev) return NETDEV_TX_OK; } - md_bytes = nfp_nfd3_prep_tx_meta(dp, skb, tls_handle); + md_bytes = nfp_nfd3_prep_tx_meta(dp, skb, tls_handle, &ipsec); if (unlikely(md_bytes < 0)) goto err_flush; @@ -312,6 +333,8 @@ netdev_tx_t nfp_nfd3_tx(struct sk_buff *skb, struct net_device *netdev) txd->vlan = cpu_to_le16(skb_vlan_tag_get(skb)); } + if (ipsec) + nfp_nfd3_ipsec_tx(txd, skb); /* Gather DMA */ if (nr_frags > 0) { __le64 second_half; @@ -764,6 +787,15 @@ nfp_nfd3_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta, return false; data += sizeof(struct nfp_net_tls_resync_req); break; +#ifdef CONFIG_NFP_NET_IPSEC + case NFP_NET_META_IPSEC: + /* Note: IPsec packet will have zero saidx, so need add 1 + * to indicate packet is IPsec packet within driver. + */ + meta->ipsec_saidx = get_unaligned_be32(data) + 1; + data += 4; + break; +#endif default: return true; } @@ -876,12 +908,11 @@ static int nfp_nfd3_rx(struct nfp_net_rx_ring *rx_ring, int budget) struct nfp_net_dp *dp = &r_vec->nfp_net->dp; struct nfp_net_tx_ring *tx_ring; struct bpf_prog *xdp_prog; + int idx, pkts_polled = 0; bool xdp_tx_cmpl = false; unsigned int true_bufsz; struct sk_buff *skb; - int pkts_polled = 0; struct xdp_buff xdp; - int idx; xdp_prog = READ_ONCE(dp->xdp_prog); true_bufsz = xdp_prog ? 
PAGE_SIZE : dp->fl_bufsz; @@ -1081,6 +1112,13 @@ static int nfp_nfd3_rx(struct nfp_net_rx_ring *rx_ring, int budget) continue; } +#ifdef CONFIG_NFP_NET_IPSEC + if (meta.ipsec_saidx != 0 && unlikely(nfp_net_ipsec_rx(&meta, skb))) { + nfp_nfd3_rx_drop(dp, r_vec, rx_ring, NULL, skb); + continue; + } +#endif + if (meta_len_xdp) skb_metadata_set(skb, meta_len_xdp); diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/ipsec.c b/drivers/net/ethernet/netronome/nfp/nfd3/ipsec.c new file mode 100644 index 000000000000..e90f8c975903 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfd3/ipsec.c @@ -0,0 +1,18 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2018 Netronome Systems, Inc */ +/* Copyright (C) 2021 Corigine, Inc */ + +#include <net/xfrm.h> + +#include "../nfp_net.h" +#include "nfd3.h" + +void nfp_nfd3_ipsec_tx(struct nfp_nfd3_tx_desc *txd, struct sk_buff *skb) +{ + struct xfrm_state *x = xfrm_input_state(skb); + + if (x->xso.dev && (x->xso.dev->features & NETIF_F_HW_ESP_TX_CSUM)) { + txd->flags |= NFD3_DESC_TX_CSUM | NFD3_DESC_TX_IP4_CSUM | + NFD3_DESC_TX_TCP_CSUM | NFD3_DESC_TX_UDP_CSUM; + } +} diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h b/drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h index 7a0df9e6c3c4..9c1c10dcbaee 100644 --- a/drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h +++ b/drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h @@ -103,4 +103,12 @@ void nfp_nfd3_rx_ring_fill_freelist(struct nfp_net_dp *dp, void nfp_nfd3_xsk_tx_free(struct nfp_nfd3_tx_buf *txbuf); int nfp_nfd3_xsk_poll(struct napi_struct *napi, int budget); +#ifndef CONFIG_NFP_NET_IPSEC +static inline void nfp_nfd3_ipsec_tx(struct nfp_nfd3_tx_desc *txd, struct sk_buff *skb) +{ +} +#else +void nfp_nfd3_ipsec_tx(struct nfp_nfd3_tx_desc *txd, struct sk_buff *skb); +#endif + #endif diff --git a/drivers/net/ethernet/netronome/nfp/nfp_app.h b/drivers/net/ethernet/netronome/nfp/nfp_app.h index dd56207df246..90707346a4ef 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_app.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_app.h @@ -445,6 +445,4 @@ int nfp_app_nic_vnic_alloc(struct nfp_app *app, struct nfp_net *nn, int nfp_app_nic_vnic_init_phy_port(struct nfp_pf *pf, struct nfp_app *app, struct nfp_net *nn, unsigned int id); -struct devlink_port *nfp_devlink_get_devlink_port(struct net_device *netdev); - #endif diff --git a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c index cb08d7bf9524..bf6bae557158 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c @@ -239,10 +239,6 @@ nfp_devlink_info_get(struct devlink *devlink, struct devlink_info_req *req, char *buf = NULL; int err; - err = devlink_info_driver_name_put(req, "nfp"); - if (err) - return err; - vendor = nfp_hwinfo_lookup(pf->hwinfo, "assembly.vendor"); part = nfp_hwinfo_lookup(pf->hwinfo, "assembly.partno"); sn = nfp_hwinfo_lookup(pf->hwinfo, "assembly.serial"); @@ -334,6 +330,8 @@ int nfp_devlink_port_register(struct nfp_app *app, struct nfp_port *port) int serial_len; int ret; + SET_NETDEV_DEVLINK_PORT(port->netdev, &port->dl_port); + rtnl_lock(); ret = nfp_devlink_fill_eth_port(port, ð_port); rtnl_unlock(); @@ -360,24 +358,3 @@ void nfp_devlink_port_unregister(struct nfp_port *port) { devl_port_unregister(&port->dl_port); } - -void nfp_devlink_port_type_eth_set(struct nfp_port *port) -{ - devlink_port_type_eth_set(&port->dl_port, port->netdev); -} - -void nfp_devlink_port_type_clear(struct nfp_port 
*port) -{ - devlink_port_type_clear(&port->dl_port); -} - -struct devlink_port *nfp_devlink_get_devlink_port(struct net_device *netdev) -{ - struct nfp_port *port; - - port = nfp_port_from_netdev(netdev); - if (!port) - return NULL; - - return &port->dl_port; -} diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.h b/drivers/net/ethernet/netronome/nfp/nfp_main.h index afd3edfa2428..14a751bfe1fe 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_main.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_main.h @@ -12,7 +12,6 @@ #include <linux/ethtool.h> #include <linux/list.h> #include <linux/types.h> -#include <linux/msi.h> #include <linux/pci.h> #include <linux/workqueue.h> #include <net/devlink.h> @@ -28,6 +27,7 @@ struct nfp_hwinfo; struct nfp_mip; struct nfp_net; struct nfp_nsp_identify; +struct nfp_eth_media_buf; struct nfp_port; struct nfp_rtsym; struct nfp_rtsym_table; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h index a101ff30a1ae..da33f09facb9 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h @@ -88,6 +88,9 @@ #define NFP_NET_FL_BATCH 16 /* Add freelist in this Batch size */ #define NFP_NET_XDP_MAX_COMPLETE 2048 /* XDP bufs to reclaim in NAPI poll */ +/* MC definitions */ +#define NFP_NET_CFG_MAC_MC_MAX 1024 /* The maximum number of MC address per port*/ + /* Offload definitions */ #define NFP_NET_N_VXLAN_PORTS (NFP_NET_CFG_VXLAN_SZ / sizeof(__be16)) @@ -263,6 +266,10 @@ struct nfp_meta_parsed { u8 tpid; u16 tci; } vlan; + +#ifdef CONFIG_NFP_NET_IPSEC + u32 ipsec_saidx; +#endif }; struct nfp_net_rx_hash { @@ -472,6 +479,7 @@ struct nfp_stat_pair { * @rx_dma_off: Offset at which DMA packets (for XDP headroom) * @rx_offset: Offset in the RX buffers where packet data starts * @ctrl: Local copy of the control register/word. + * @ctrl_w1: Local copy of the control register/word1. 
* @fl_bufsz: Currently configured size of the freelist buffers * @xdp_prog: Installed XDP program * @tx_rings: Array of pre-allocated TX ring structures @@ -504,6 +512,7 @@ struct nfp_net_dp { u32 rx_dma_off; u32 ctrl; + u32 ctrl_w1; u32 fl_bufsz; struct bpf_prog *xdp_prog; @@ -541,6 +550,7 @@ struct nfp_net_dp { * @id: vNIC id within the PF (0 for VFs) * @fw_ver: Firmware version * @cap: Capabilities advertised by the Firmware + * @cap_w1: Extended capabilities word advertised by the Firmware * @max_mtu: Maximum support MTU advertised by the Firmware * @rss_hfunc: RSS selected hash function * @rss_cfg: RSS configuration @@ -583,6 +593,7 @@ struct nfp_net_dp { * @qcp_cfg: Pointer to QCP queue used for configuration notification * @tx_bar: Pointer to mapped TX queues * @rx_bar: Pointer to mapped FL/RX queues + * @xa_ipsec: IPsec xarray SA data * @tlv_caps: Parsed TLV capabilities * @ktls_tx_conn_cnt: Number of offloaded kTLS TX connections * @ktls_rx_conn_cnt: Number of offloaded kTLS RX connections @@ -617,6 +628,7 @@ struct nfp_net { u32 id; u32 cap; + u32 cap_w1; u32 max_mtu; u8 rss_hfunc; @@ -670,6 +682,10 @@ struct nfp_net { u8 __iomem *tx_bar; u8 __iomem *rx_bar; +#ifdef CONFIG_NFP_NET_IPSEC + struct xarray xa_ipsec; +#endif + struct nfp_net_tlv_caps tlv_caps; unsigned int ktls_tx_conn_cnt; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index 27f4786ace4f..2314cf55e821 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -27,7 +27,6 @@ #include <linux/page_ref.h> #include <linux/pci.h> #include <linux/pci_regs.h> -#include <linux/msi.h> #include <linux/ethtool.h> #include <linux/log2.h> #include <linux/if_vlan.h> @@ -735,8 +734,9 @@ static unsigned int nfp_net_calc_fl_bufsz_xsk(struct nfp_net_dp *dp) */ static void nfp_net_vecs_init(struct nfp_net *nn) { + int numa_node = dev_to_node(&nn->pdev->dev); struct nfp_net_r_vector *r_vec; - int r; + unsigned int r; nn->lsc_handler = nfp_net_irq_lsc; nn->exn_handler = nfp_net_irq_exn; @@ -762,7 +762,7 @@ static void nfp_net_vecs_init(struct nfp_net *nn) tasklet_disable(&r_vec->tasklet); } - cpumask_set_cpu(r, &r_vec->affinity_mask); + cpumask_set_cpu(cpumask_local_spread(r, numa_node), &r_vec->affinity_mask); } } @@ -1007,6 +1007,7 @@ static int nfp_net_set_config_and_enable(struct nfp_net *nn) new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG; nn_writel(nn, NFP_NET_CFG_CTRL, new_ctrl); + nn_writel(nn, NFP_NET_CFG_CTRL_WORD1, nn->dp.ctrl_w1); err = nfp_net_reconfig(nn, update); if (err) { nfp_net_clear_config_and_disable(nn); @@ -1333,18 +1334,59 @@ err_unlock: return err; } +static int nfp_net_mc_cfg(struct net_device *netdev, const unsigned char *addr, const u32 cmd) +{ + struct nfp_net *nn = netdev_priv(netdev); + int ret; + + ret = nfp_net_mbox_lock(nn, NFP_NET_CFG_MULTICAST_SZ); + if (ret) + return ret; + + nn_writel(nn, nn->tlv_caps.mbox_off + NFP_NET_CFG_MULTICAST_MAC_HI, + get_unaligned_be32(addr)); + nn_writew(nn, nn->tlv_caps.mbox_off + NFP_NET_CFG_MULTICAST_MAC_LO, + get_unaligned_be16(addr + 4)); + + return nfp_net_mbox_reconfig_and_unlock(nn, cmd); +} + +static int nfp_net_mc_sync(struct net_device *netdev, const unsigned char *addr) +{ + struct nfp_net *nn = netdev_priv(netdev); + + if (netdev_mc_count(netdev) > NFP_NET_CFG_MAC_MC_MAX) { + nn_err(nn, "Requested number of MC addresses (%d) exceeds maximum (%d).\n", + netdev_mc_count(netdev), NFP_NET_CFG_MAC_MC_MAX); + return -EINVAL; + } + + return 
nfp_net_mc_cfg(netdev, addr, NFP_NET_CFG_MBOX_CMD_MULTICAST_ADD); +} + +static int nfp_net_mc_unsync(struct net_device *netdev, const unsigned char *addr) +{ + return nfp_net_mc_cfg(netdev, addr, NFP_NET_CFG_MBOX_CMD_MULTICAST_DEL); +} + static void nfp_net_set_rx_mode(struct net_device *netdev) { struct nfp_net *nn = netdev_priv(netdev); - u32 new_ctrl; + u32 new_ctrl, new_ctrl_w1; new_ctrl = nn->dp.ctrl; + new_ctrl_w1 = nn->dp.ctrl_w1; if (!netdev_mc_empty(netdev) || netdev->flags & IFF_ALLMULTI) new_ctrl |= nn->cap & NFP_NET_CFG_CTRL_L2MC; else new_ctrl &= ~NFP_NET_CFG_CTRL_L2MC; + if (netdev->flags & IFF_ALLMULTI) + new_ctrl_w1 &= ~NFP_NET_CFG_CTRL_MCAST_FILTER; + else + new_ctrl_w1 |= nn->cap_w1 & NFP_NET_CFG_CTRL_MCAST_FILTER; + if (netdev->flags & IFF_PROMISC) { if (nn->cap & NFP_NET_CFG_CTRL_PROMISC) new_ctrl |= NFP_NET_CFG_CTRL_PROMISC; @@ -1354,13 +1396,21 @@ static void nfp_net_set_rx_mode(struct net_device *netdev) new_ctrl &= ~NFP_NET_CFG_CTRL_PROMISC; } - if (new_ctrl == nn->dp.ctrl) + if ((nn->cap_w1 & NFP_NET_CFG_CTRL_MCAST_FILTER) && + __dev_mc_sync(netdev, nfp_net_mc_sync, nfp_net_mc_unsync)) + netdev_err(netdev, "Sync mc address failed\n"); + + if (new_ctrl == nn->dp.ctrl && new_ctrl_w1 == nn->dp.ctrl_w1) return; - nn_writel(nn, NFP_NET_CFG_CTRL, new_ctrl); + if (new_ctrl != nn->dp.ctrl) + nn_writel(nn, NFP_NET_CFG_CTRL, new_ctrl); + if (new_ctrl_w1 != nn->dp.ctrl_w1) + nn_writel(nn, NFP_NET_CFG_CTRL_WORD1, new_ctrl_w1); nfp_net_reconfig_post(nn, NFP_NET_CFG_UPDATE_GEN); nn->dp.ctrl = new_ctrl; + nn->dp.ctrl_w1 = new_ctrl_w1; } static void nfp_net_rss_init_itbl(struct nfp_net *nn) @@ -1631,21 +1681,21 @@ static void nfp_net_stat64(struct net_device *netdev, unsigned int start; do { - start = u64_stats_fetch_begin_irq(&r_vec->rx_sync); + start = u64_stats_fetch_begin(&r_vec->rx_sync); data[0] = r_vec->rx_pkts; data[1] = r_vec->rx_bytes; data[2] = r_vec->rx_drops; - } while (u64_stats_fetch_retry_irq(&r_vec->rx_sync, start)); + } while (u64_stats_fetch_retry(&r_vec->rx_sync, start)); stats->rx_packets += data[0]; stats->rx_bytes += data[1]; stats->rx_dropped += data[2]; do { - start = u64_stats_fetch_begin_irq(&r_vec->tx_sync); + start = u64_stats_fetch_begin(&r_vec->tx_sync); data[0] = r_vec->tx_pkts; data[1] = r_vec->tx_bytes; data[2] = r_vec->tx_errors; - } while (u64_stats_fetch_retry_irq(&r_vec->tx_sync, start)); + } while (u64_stats_fetch_retry(&r_vec->tx_sync, start)); stats->tx_packets += data[0]; stats->tx_bytes += data[1]; stats->tx_errors += data[2]; @@ -2013,7 +2063,6 @@ const struct net_device_ops nfp_nfd3_netdev_ops = { .ndo_get_phys_port_name = nfp_net_get_phys_port_name, .ndo_bpf = nfp_net_xdp, .ndo_xsk_wakeup = nfp_net_xsk_wakeup, - .ndo_get_devlink_port = nfp_devlink_get_devlink_port, .ndo_bridge_getlink = nfp_net_bridge_getlink, .ndo_bridge_setlink = nfp_net_bridge_setlink, }; @@ -2044,7 +2093,6 @@ const struct net_device_ops nfp_nfdk_netdev_ops = { .ndo_features_check = nfp_net_features_check, .ndo_get_phys_port_name = nfp_net_get_phys_port_name, .ndo_bpf = nfp_net_xdp, - .ndo_get_devlink_port = nfp_devlink_get_devlink_port, .ndo_bridge_getlink = nfp_net_bridge_getlink, .ndo_bridge_setlink = nfp_net_bridge_setlink, }; @@ -2094,7 +2142,7 @@ void nfp_net_info(struct nfp_net *nn) nn->fw_ver.extend, nn->fw_ver.class, nn->fw_ver.major, nn->fw_ver.minor, nn->max_mtu); - nn_info(nn, "CAP: %#x %s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n", + nn_info(nn, "CAP: %#x %s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n", nn->cap, nn->cap & 
		  NFP_NET_CFG_CTRL_PROMISC  ? "PROMISC "  : "",
 		nn->cap & NFP_NET_CFG_CTRL_L2BC     ? "L2BCFILT " : "",
@@ -2122,6 +2170,7 @@ void nfp_net_info(struct nfp_net *nn)
 		nn->cap & NFP_NET_CFG_CTRL_CSUM_COMPLETE ? "RXCSUM_COMPLETE " : "",
 		nn->cap & NFP_NET_CFG_CTRL_LIVE_ADDR ? "LIVE_ADDR " : "",
+		nn->cap_w1 & NFP_NET_CFG_CTRL_MCAST_FILTER ? "MULTICAST_FILTER " : "",
 		nfp_app_extra_cap(nn->app, nn));
 }
 
@@ -2373,6 +2422,12 @@ static void nfp_net_netdev_init(struct nfp_net *nn)
 	}
 	if (nn->cap & NFP_NET_CFG_CTRL_RSS_ANY)
 		netdev->hw_features |= NETIF_F_RXHASH;
+
+#ifdef CONFIG_NFP_NET_IPSEC
+	if (nn->cap_w1 & NFP_NET_CFG_CTRL_IPSEC)
+		netdev->hw_features |= NETIF_F_HW_ESP | NETIF_F_HW_ESP_TX_CSUM;
+#endif
+
 	if (nn->cap & NFP_NET_CFG_CTRL_VXLAN) {
 		if (nn->cap & NFP_NET_CFG_CTRL_LSO) {
 			netdev->hw_features |= NETIF_F_GSO_UDP_TUNNEL |
@@ -2454,6 +2509,7 @@ static int nfp_net_read_caps(struct nfp_net *nn)
 {
 	/* Get some of the read-only fields from the BAR */
 	nn->cap = nn_readl(nn, NFP_NET_CFG_CAP);
+	nn->cap_w1 = nn_readq(nn, NFP_NET_CFG_CAP_WORD1);
 	nn->max_mtu = nn_readl(nn, NFP_NET_CFG_MAX_MTU);
 
 	/* ABI 4.x and ctrl vNIC always use chained metadata, in other cases
@@ -2543,6 +2599,9 @@ int nfp_net_init(struct nfp_net *nn)
 	if (nn->cap & NFP_NET_CFG_CTRL_TXRWB)
 		nn->dp.ctrl |= NFP_NET_CFG_CTRL_TXRWB;
 
+	if (nn->cap_w1 & NFP_NET_CFG_CTRL_MCAST_FILTER)
+		nn->dp.ctrl_w1 |= NFP_NET_CFG_CTRL_MCAST_FILTER;
+
 	/* Stash the re-configuration queue away.  First odd queue in TX Bar */
 	nn->qcp_cfg = nn->tx_bar + NFP_QCP_QUEUE_ADDR_SZ;
 
@@ -2550,6 +2609,7 @@ int nfp_net_init(struct nfp_net *nn)
 	nn_writel(nn, NFP_NET_CFG_CTRL, 0);
 	nn_writeq(nn, NFP_NET_CFG_TXRS_ENABLE, 0);
 	nn_writeq(nn, NFP_NET_CFG_RXRS_ENABLE, 0);
+	nn_writel(nn, NFP_NET_CFG_CTRL_WORD1, 0);
 	err = nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_RING |
 				   NFP_NET_CFG_UPDATE_GEN);
 	if (err)
@@ -2565,6 +2625,8 @@ int nfp_net_init(struct nfp_net *nn)
 		err = nfp_net_tls_init(nn);
 		if (err)
 			goto err_clean_mbox;
+
+		nfp_net_ipsec_init(nn);
 	}
 
 	nfp_net_vecs_init(nn);
@@ -2588,6 +2650,7 @@ void nfp_net_clean(struct nfp_net *nn)
 		return;
 
 	unregister_netdev(nn->dp.netdev);
+	nfp_net_ipsec_clean(nn);
 	nfp_ccm_mbox_clean(nn);
 	nfp_net_reconfig_wait_posted(nn);
 }
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
index 6714d5e8fdab..51124309ae1f 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
@@ -48,6 +48,7 @@
 #define NFP_NET_META_CSUM		6 /* checksum complete type */
 #define NFP_NET_META_CONN_HANDLE	7
 #define NFP_NET_META_RESYNC_INFO	8 /* RX resync info request */
+#define NFP_NET_META_IPSEC		9 /* IPsec SA index for tx and rx */
 
 #define NFP_META_PORT_ID_CTRL		~0U
 
@@ -55,6 +56,8 @@
 #define NFP_NET_META_VLAN_SIZE			4
 #define NFP_NET_META_PORTID_SIZE		4
 #define NFP_NET_META_CONN_HANDLE_SIZE		8
+#define NFP_NET_META_IPSEC_SIZE			4
+#define NFP_NET_META_IPSEC_FIELD_SIZE		12
 /* Hash type pre-pended when a RSS hash was computed */
 #define NFP_NET_RSS_NONE		0
 #define NFP_NET_RSS_IPV4		1
@@ -257,10 +260,20 @@
 #define   NFP_NET_CFG_BPF_CFG_MASK	7ULL
 #define   NFP_NET_CFG_BPF_ADDR_MASK	(~NFP_NET_CFG_BPF_CFG_MASK)
 
-/* 40B reserved for future use (0x0098 - 0x00c0)
+/* 3 words reserved for extended ctrl words (0x0098 - 0x00a4)
+ * 3 words reserved for extended cap words (0x00a4 - 0x00b0)
+ * Currently only one word is used, can be extended in future.
  */
-#define NFP_NET_CFG_RESERVED		0x0098
-#define NFP_NET_CFG_RESERVED_SZ		0x0028
+#define NFP_NET_CFG_CTRL_WORD1		0x0098
+#define   NFP_NET_CFG_CTRL_PKT_TYPE	  (0x1 << 0) /* Pkttype offload */
+#define   NFP_NET_CFG_CTRL_IPSEC	  (0x1 << 1) /* IPsec offload */
+#define   NFP_NET_CFG_CTRL_MCAST_FILTER	  (0x1 << 2) /* Multicast Filter */
+
+#define NFP_NET_CFG_CAP_WORD1		0x00a4
+
+/* 16B reserved for future use (0x00b0 - 0x00c0) */
+#define NFP_NET_CFG_RESERVED		0x00b0
+#define NFP_NET_CFG_RESERVED_SZ		0x0010
 
 /* RSS configuration (0x0100 - 0x01ac):
  * Used only when NFP_NET_CFG_CTRL_RSS is enabled
@@ -390,17 +403,20 @@
  */
 #define NFP_NET_CFG_MBOX_BASE		0x1800
 #define NFP_NET_CFG_MBOX_VAL_MAX_SZ	0x1F8
-
+#define NFP_NET_CFG_MBOX_VAL		0x1808
 #define NFP_NET_CFG_MBOX_SIMPLE_CMD	0x0
 #define NFP_NET_CFG_MBOX_SIMPLE_RET	0x4
 #define NFP_NET_CFG_MBOX_SIMPLE_VAL	0x8
 
 #define NFP_NET_CFG_MBOX_CMD_CTAG_FILTER_ADD 1
 #define NFP_NET_CFG_MBOX_CMD_CTAG_FILTER_KILL 2
-
+#define NFP_NET_CFG_MBOX_CMD_IPSEC 3
 #define NFP_NET_CFG_MBOX_CMD_PCI_DSCP_PRIOMAP_SET	5
 #define NFP_NET_CFG_MBOX_CMD_TLV_CMSG			6
 
+#define NFP_NET_CFG_MBOX_CMD_MULTICAST_ADD		8
+#define NFP_NET_CFG_MBOX_CMD_MULTICAST_DEL		9
+
 /* VLAN filtering using general use mailbox
  * %NFP_NET_CFG_VLAN_FILTER:		Base address of VLAN filter mailbox
  * %NFP_NET_CFG_VLAN_FILTER_VID:	VLAN ID to filter
@@ -412,6 +428,17 @@
 #define  NFP_NET_CFG_VLAN_FILTER_PROTO	 (NFP_NET_CFG_VLAN_FILTER + 2)
 #define NFP_NET_CFG_VLAN_FILTER_SZ	 0x0004
 
+/* Multicast filtering using general use mailbox
+ * %NFP_NET_CFG_MULTICAST:		Base address of Multicast filter mailbox
+ * %NFP_NET_CFG_MULTICAST_MAC_HI:	High 32-bits of Multicast MAC address
+ * %NFP_NET_CFG_MULTICAST_MAC_LO:	Low 16-bits of Multicast MAC address
+ * %NFP_NET_CFG_MULTICAST_SZ:		Size of the Multicast filter mailbox in bytes
+ */
+#define NFP_NET_CFG_MULTICAST		NFP_NET_CFG_MBOX_SIMPLE_VAL
+#define NFP_NET_CFG_MULTICAST_MAC_HI	NFP_NET_CFG_MULTICAST
+#define NFP_NET_CFG_MULTICAST_MAC_LO	(NFP_NET_CFG_MULTICAST + 6)
+#define NFP_NET_CFG_MULTICAST_SZ	0x0006
+
 /* TLV capabilities
  * %NFP_NET_CFG_TLV_TYPE:	Offset of type within the TLV
  * %NFP_NET_CFG_TLV_TYPE_REQUIRED: Driver must be able to parse the TLV
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
index 991059d6cb32..a4a89ef3f18b 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
@@ -293,6 +293,76 @@ nfp_net_set_fec_link_mode(struct nfp_eth_table_port *eth_port,
 	}
 }
 
+static const u16 nfp_eth_media_table[] = {
+	[NFP_MEDIA_1000BASE_CX]		= ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
+	[NFP_MEDIA_1000BASE_KX]		= ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
+	[NFP_MEDIA_10GBASE_KX4]		= ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT,
+	[NFP_MEDIA_10GBASE_KR]		= ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
+	[NFP_MEDIA_10GBASE_CX4]		= ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT,
+	[NFP_MEDIA_10GBASE_CR]		= ETHTOOL_LINK_MODE_10000baseCR_Full_BIT,
+	[NFP_MEDIA_10GBASE_SR]		= ETHTOOL_LINK_MODE_10000baseSR_Full_BIT,
+	[NFP_MEDIA_10GBASE_ER]		= ETHTOOL_LINK_MODE_10000baseER_Full_BIT,
+	[NFP_MEDIA_25GBASE_KR]		= ETHTOOL_LINK_MODE_25000baseKR_Full_BIT,
+	[NFP_MEDIA_25GBASE_KR_S]	= ETHTOOL_LINK_MODE_25000baseKR_Full_BIT,
+	[NFP_MEDIA_25GBASE_CR]		= ETHTOOL_LINK_MODE_25000baseCR_Full_BIT,
+	[NFP_MEDIA_25GBASE_CR_S]	= ETHTOOL_LINK_MODE_25000baseCR_Full_BIT,
+	[NFP_MEDIA_25GBASE_SR]		= ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
+	[NFP_MEDIA_40GBASE_CR4]		= ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT,
+	[NFP_MEDIA_40GBASE_KR4]		= ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT,
+	[NFP_MEDIA_40GBASE_SR4]		= ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT,
+	[NFP_MEDIA_40GBASE_LR4]		= ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT,
+	[NFP_MEDIA_50GBASE_KR]		= ETHTOOL_LINK_MODE_50000baseKR_Full_BIT,
+	[NFP_MEDIA_50GBASE_SR]		= ETHTOOL_LINK_MODE_50000baseSR_Full_BIT,
+	[NFP_MEDIA_50GBASE_CR]		= ETHTOOL_LINK_MODE_50000baseCR_Full_BIT,
+	[NFP_MEDIA_50GBASE_LR]		= ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT,
+	[NFP_MEDIA_50GBASE_ER]		= ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT,
+	[NFP_MEDIA_50GBASE_FR]		= ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT,
+	[NFP_MEDIA_100GBASE_KR4]	= ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
+	[NFP_MEDIA_100GBASE_SR4]	= ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT,
+	[NFP_MEDIA_100GBASE_CR4]	= ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT,
+	[NFP_MEDIA_100GBASE_KP4]	= ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
+	[NFP_MEDIA_100GBASE_CR10]	= ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT,
+};
+
+static void nfp_add_media_link_mode(struct nfp_port *port,
+				    struct nfp_eth_table_port *eth_port,
+				    struct ethtool_link_ksettings *cmd)
+{
+	u64 supported_modes[2], advertised_modes[2];
+	struct nfp_eth_media_buf ethm = {
+		.eth_index = eth_port->eth_index,
+	};
+	struct nfp_cpp *cpp = port->app->cpp;
+
+	if (nfp_eth_read_media(cpp, &ethm))
+		return;
+
+	for (u32 i = 0; i < 2; i++) {
+		supported_modes[i] = le64_to_cpu(ethm.supported_modes[i]);
+		advertised_modes[i] = le64_to_cpu(ethm.advertised_modes[i]);
+	}
+
+	for (u32 i = 0; i < NFP_MEDIA_LINK_MODES_NUMBER; i++) {
+		if (i < 64) {
+			if (supported_modes[0] & BIT_ULL(i))
+				__set_bit(nfp_eth_media_table[i],
+					  cmd->link_modes.supported);
+
+			if (advertised_modes[0] & BIT_ULL(i))
+				__set_bit(nfp_eth_media_table[i],
+					  cmd->link_modes.advertising);
+		} else {
+			if (supported_modes[1] & BIT_ULL(i - 64))
+				__set_bit(nfp_eth_media_table[i],
+					  cmd->link_modes.supported);
+
+			if (advertised_modes[1] & BIT_ULL(i - 64))
+				__set_bit(nfp_eth_media_table[i],
+					  cmd->link_modes.advertising);
+		}
+	}
+}
+
 /**
  * nfp_net_get_link_ksettings - Get Link Speed settings
  * @netdev:	network interface device structure
@@ -311,6 +381,8 @@ nfp_net_get_link_ksettings(struct net_device *netdev,
 	u16 sts;
 
 	/* Init to unknowns */
+	ethtool_link_ksettings_zero_link_mode(cmd, supported);
+	ethtool_link_ksettings_zero_link_mode(cmd, advertising);
 	ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
 	cmd->base.port = PORT_OTHER;
 	cmd->base.speed = SPEED_UNKNOWN;
@@ -321,6 +393,7 @@ nfp_net_get_link_ksettings(struct net_device *netdev,
 	if (eth_port) {
 		ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
 		ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
+		nfp_add_media_link_mode(port, eth_port, cmd);
 		if (eth_port->supp_aneg) {
 			ethtool_link_ksettings_add_link_mode(cmd, supported, Autoneg);
 			if (eth_port->aneg == NFP_ANEG_AUTO) {
@@ -686,7 +759,7 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
 		unsigned int start;
 
 		do {
-			start = u64_stats_fetch_begin_irq(&nn->r_vecs[i].rx_sync);
+			start = u64_stats_fetch_begin(&nn->r_vecs[i].rx_sync);
 			data[0] = nn->r_vecs[i].rx_pkts;
 			tmp[0] = nn->r_vecs[i].hw_csum_rx_ok;
 			tmp[1] = nn->r_vecs[i].hw_csum_rx_inner_ok;
@@ -694,10 +767,10 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
 			tmp[3] = nn->r_vecs[i].hw_csum_rx_error;
 			tmp[4] = nn->r_vecs[i].rx_replace_buf_alloc_fail;
 			tmp[5] = nn->r_vecs[i].hw_tls_rx;
-		} while (u64_stats_fetch_retry_irq(&nn->r_vecs[i].rx_sync, start));
+		} while (u64_stats_fetch_retry(&nn->r_vecs[i].rx_sync, start));
 
 		do {
-			start = u64_stats_fetch_begin_irq(&nn->r_vecs[i].tx_sync);
+			start = u64_stats_fetch_begin(&nn->r_vecs[i].tx_sync);
 			data[1] = nn->r_vecs[i].tx_pkts;
 			data[2] = nn->r_vecs[i].tx_busy;
 			tmp[6] = nn->r_vecs[i].hw_csum_tx;
@@ -707,7 +780,7 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
 			tmp[10] = nn->r_vecs[i].hw_tls_tx;
 			tmp[11] = nn->r_vecs[i].tls_tx_fallback;
 			tmp[12] = nn->r_vecs[i].tls_tx_no_fallback;
-		} while (u64_stats_fetch_retry_irq(&nn->r_vecs[i].tx_sync, start));
+		} while (u64_stats_fetch_retry(&nn->r_vecs[i].tx_sync, start));
 
 		data += NN_RVEC_PER_Q_STATS;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
index 3bae92dc899e..abfe788d558f 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
@@ -16,7 +16,6 @@
 #include <linux/lockdep.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
-#include <linux/msi.h>
 #include <linux/random.h>
 #include <linux/rtnetlink.h>
 
@@ -156,22 +155,17 @@ nfp_net_pf_init_vnic(struct nfp_pf *pf, struct nfp_net *nn, unsigned int id)
 
 	nfp_net_debugfs_vnic_add(nn, pf->ddir);
 
-	if (nn->port)
-		nfp_devlink_port_type_eth_set(nn->port);
-
 	nfp_net_info(nn);
 
 	if (nfp_net_is_data_vnic(nn)) {
 		err = nfp_app_vnic_init(pf->app, nn);
 		if (err)
-			goto err_devlink_port_type_clean;
+			goto err_debugfs_vnic_clean;
 	}
 
 	return 0;
 
-err_devlink_port_type_clean:
-	if (nn->port)
-		nfp_devlink_port_type_clear(nn->port);
+err_debugfs_vnic_clean:
 	nfp_net_debugfs_dir_clean(&nn->debugfs_dir);
 	nfp_net_clean(nn);
 err_devlink_port_clean:
@@ -220,8 +214,6 @@ static void nfp_net_pf_clean_vnic(struct nfp_pf *pf, struct nfp_net *nn)
 {
 	if (nfp_net_is_data_vnic(nn))
 		nfp_app_vnic_clean(pf->app, nn);
-	if (nn->port)
-		nfp_devlink_port_type_clear(nn->port);
 	nfp_net_debugfs_dir_clean(&nn->debugfs_dir);
 	nfp_net_clean(nn);
 	if (nn->port)
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
index 8b77582bdfa0..3af1229a3f08 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
@@ -134,13 +134,13 @@ nfp_repr_get_host_stats64(const struct net_device *netdev,
 		repr_stats = per_cpu_ptr(repr->stats, i);
 		do {
-			start = u64_stats_fetch_begin_irq(&repr_stats->syncp);
+			start = u64_stats_fetch_begin(&repr_stats->syncp);
 			tbytes = repr_stats->tx_bytes;
 			tpkts = repr_stats->tx_packets;
 			tdrops = repr_stats->tx_drops;
 			rbytes = repr_stats->rx_bytes;
 			rpkts = repr_stats->rx_packets;
-		} while (u64_stats_fetch_retry_irq(&repr_stats->syncp, start));
+		} while (u64_stats_fetch_retry(&repr_stats->syncp, start));
 
 		stats->tx_bytes += tbytes;
 		stats->tx_packets += tpkts;
@@ -275,7 +275,6 @@ const struct net_device_ops nfp_repr_netdev_ops = {
 	.ndo_set_features	= nfp_port_set_features,
 	.ndo_set_mac_address    = eth_mac_addr,
 	.ndo_get_port_parent_id	= nfp_port_get_port_parent_id,
-	.ndo_get_devlink_port	= nfp_devlink_get_devlink_port,
 };
 
 void
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_port.h b/drivers/net/ethernet/netronome/nfp/nfp_port.h
index 6793cdf9ff11..f8cd157ca1d7 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_port.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_port.h
@@ -129,8 +129,6 @@ int nfp_net_refresh_port_table_sync(struct nfp_pf *pf);
 
 int nfp_devlink_port_register(struct nfp_app *app, struct nfp_port *port);
 void nfp_devlink_port_unregister(struct nfp_port *port);
-void nfp_devlink_port_type_eth_set(struct nfp_port *port);
-void nfp_devlink_port_type_clear(struct nfp_port *port);
 
 /* Mac stats (0x0000 - 0x0200)
  * all counters are 64bit.
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
index 730fea214b8a..7136bc48530b 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
@@ -100,6 +100,7 @@ enum nfp_nsp_cmd {
 	SPCODE_FW_LOADED	= 19, /* Is application firmware loaded */
 	SPCODE_VERSIONS		= 21, /* Report FW versions */
 	SPCODE_READ_SFF_EEPROM	= 22, /* Read module EEPROM */
+	SPCODE_READ_MEDIA	= 23, /* Get either the supported or advertised media for a port */
 };
 
 struct nfp_nsp_dma_buf {
@@ -1100,4 +1101,20 @@ int nfp_nsp_read_module_eeprom(struct nfp_nsp *state, int eth_index,
 	kfree(buf);
 
 	return ret;
+};
+
+int nfp_nsp_read_media(struct nfp_nsp *state, void *buf, unsigned int size)
+{
+	struct nfp_nsp_command_buf_arg media = {
+		{
+			.code		= SPCODE_READ_MEDIA,
+			.option		= size,
+		},
+		.in_buf		= buf,
+		.in_size	= size,
+		.out_buf	= buf,
+		.out_size	= size,
+	};
+
+	return nfp_nsp_command_buf(state, &media);
 }
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h
index 992d72ac98d3..8f5cab0032d0 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h
@@ -65,6 +65,11 @@ static inline bool nfp_nsp_has_read_module_eeprom(struct nfp_nsp *state)
 	return nfp_nsp_get_abi_ver_minor(state) > 28;
 }
 
+static inline bool nfp_nsp_has_read_media(struct nfp_nsp *state)
+{
+	return nfp_nsp_get_abi_ver_minor(state) > 33;
+}
+
 enum nfp_eth_interface {
 	NFP_INTERFACE_NONE	= 0,
 	NFP_INTERFACE_SFP	= 1,
@@ -97,6 +102,47 @@ enum nfp_eth_fec {
 	NFP_FEC_DISABLED_BIT,
 };
 
+/* link modes about RJ45 haven't been used, so there's no mapping to them */
+enum nfp_ethtool_link_mode_list {
+	NFP_MEDIA_W0_RJ45_10M,
+	NFP_MEDIA_W0_RJ45_10M_HD,
+	NFP_MEDIA_W0_RJ45_100M,
+	NFP_MEDIA_W0_RJ45_100M_HD,
+	NFP_MEDIA_W0_RJ45_1G,
+	NFP_MEDIA_W0_RJ45_2P5G,
+	NFP_MEDIA_W0_RJ45_5G,
+	NFP_MEDIA_W0_RJ45_10G,
+	NFP_MEDIA_1000BASE_CX,
+	NFP_MEDIA_1000BASE_KX,
+	NFP_MEDIA_10GBASE_KX4,
+	NFP_MEDIA_10GBASE_KR,
+	NFP_MEDIA_10GBASE_CX4,
+	NFP_MEDIA_10GBASE_CR,
+	NFP_MEDIA_10GBASE_SR,
+	NFP_MEDIA_10GBASE_ER,
+	NFP_MEDIA_25GBASE_KR,
+	NFP_MEDIA_25GBASE_KR_S,
+	NFP_MEDIA_25GBASE_CR,
+	NFP_MEDIA_25GBASE_CR_S,
+	NFP_MEDIA_25GBASE_SR,
+	NFP_MEDIA_40GBASE_CR4,
+	NFP_MEDIA_40GBASE_KR4,
+	NFP_MEDIA_40GBASE_SR4,
+	NFP_MEDIA_40GBASE_LR4,
+	NFP_MEDIA_50GBASE_KR,
+	NFP_MEDIA_50GBASE_SR,
+	NFP_MEDIA_50GBASE_CR,
+	NFP_MEDIA_50GBASE_LR,
+	NFP_MEDIA_50GBASE_ER,
+	NFP_MEDIA_50GBASE_FR,
+	NFP_MEDIA_100GBASE_KR4,
+	NFP_MEDIA_100GBASE_SR4,
+	NFP_MEDIA_100GBASE_CR4,
+	NFP_MEDIA_100GBASE_KP4,
+	NFP_MEDIA_100GBASE_CR10,
+	NFP_MEDIA_LINK_MODES_NUMBER
+};
+
 #define NFP_FEC_AUTO		BIT(NFP_FEC_AUTO_BIT)
 #define NFP_FEC_BASER		BIT(NFP_FEC_BASER_BIT)
 #define NFP_FEC_REED_SOLOMON	BIT(NFP_FEC_REED_SOLOMON_BIT)
@@ -256,6 +302,16 @@ enum nfp_nsp_sensor_id {
 int nfp_hwmon_read_sensor(struct nfp_cpp *cpp, enum nfp_nsp_sensor_id id,
 			  long *val);
 
+struct nfp_eth_media_buf {
+	u8 eth_index;
+	u8 reserved[7];
+	__le64 supported_modes[2];
+	__le64 advertised_modes[2];
+};
+
+int nfp_nsp_read_media(struct nfp_nsp *state, void *buf, unsigned int size);
+int nfp_eth_read_media(struct nfp_cpp *cpp, struct nfp_eth_media_buf *ethm);
+
 #define NFP_NSP_VERSION_BUFSZ	1024 /* reasonable size, not in the ABI */
 
 enum nfp_nsp_versions {
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp_eth.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp_eth.c
index bb64efec4c46..570ac1bb2122 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp_eth.c
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp_eth.c
@@ -647,3 +647,29 @@ int __nfp_eth_set_split(struct nfp_nsp *nsp, unsigned int lanes)
 	return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_PORT, NSP_ETH_PORT_LANES,
 				      lanes, NSP_ETH_CTRL_SET_LANES);
 }
+
+int nfp_eth_read_media(struct nfp_cpp *cpp, struct nfp_eth_media_buf *ethm)
+{
+	struct nfp_nsp *nsp;
+	int ret;
+
+	nsp = nfp_nsp_open(cpp);
+	if (IS_ERR(nsp)) {
+		nfp_err(cpp, "Failed to access the NSP: %pe\n", nsp);
+		return PTR_ERR(nsp);
+	}
+
+	if (!nfp_nsp_has_read_media(nsp)) {
+		nfp_warn(cpp, "Reading media link modes not supported. Please update flash\n");
+		ret = -EOPNOTSUPP;
+		goto exit_close_nsp;
+	}
+
+	ret = nfp_nsp_read_media(nsp, ethm, sizeof(*ethm));
+	if (ret)
+		nfp_err(cpp, "Reading media link modes failed: %pe\n", ERR_PTR(ret));
+
+exit_close_nsp:
+	nfp_nsp_close(nsp);
+	return ret;
+}
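Illustration (not part of the patch): the SPCODE_READ_MEDIA reply, struct nfp_eth_media_buf, carries supported and advertised link modes as two little-endian 64-bit words, and nfp_add_media_link_mode() splits NFP media mode indices at 64 between word 0 and word 1 before mapping them through nfp_eth_media_table[]. A minimal sketch of that word/bit lookup, assuming only the declarations added above; the helper name is hypothetical:

/* Hypothetical helper, for illustration only: report whether one NFP media
 * mode index is marked supported in an nfp_eth_media_buf.  Indices 0-63
 * live in supported_modes[0], 64 and up in supported_modes[1], mirroring
 * the split used by nfp_add_media_link_mode().
 */
static bool nfp_media_mode_is_supported(const struct nfp_eth_media_buf *ethm,
					unsigned int mode)
{
	u64 word = le64_to_cpu(ethm->supported_modes[mode / 64]);

	return word & BIT_ULL(mode % 64);
}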