enetc: Introduce basic PF and VF ENETC ethernet drivers
ENETC is a multi-port virtualized Ethernet controller supporting GbE
designs and Time-Sensitive Networking (TSN) functionality.
ENETC is operating as an SR-IOV multi-PF capable Root Complex Integrated
Endpoint (RCIE). As such, it contains multiple physical (PF) and
virtual (VF) PCIe functions, discoverable by standard PCI Express.
Introduce basic PF and VF ENETC ethernet drivers. The PF has access to
the ENETC Port registers and resources and makes the required privileged
configurations for the underlying VF devices. Common functionality is
controlled through so-called System Interface (SI) register blocks, and
PFs and VFs each own an SI. Though SI register blocks are almost identical,
there are a few privileged SI-level controls that are accessible only to
PFs, so a distinction is made between PF SIs (PSI) and VF SIs (VSI).
Consequently, the bulk of the code, including datapath processing, basic
h/w offload support and generic PCI-related configuration, is shared
between the two drivers and factored out into common source files
(i.e. enetc.c).
Major functionalities included (for both drivers):
MSI-X support for Rx and Tx processing, assignment of Rx/Tx BD ring pairs
to MSI-X entries, multi-queue support, Rx S/G (Rx frame fragmentation) and
jumbo frame (up to 9600B) support, Rx paged allocation and reuse, Tx S/G
support (NETIF_F_SG), Rx and Tx checksum offload, PF MAC filtering and
initial control ring support, VLAN extraction/insertion, PF Rx VLAN
CTAG filtering, VF MAC address config support, VF VLAN isolation support,
etc.
Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-01-22 15:29:54 +02:00

// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
/* Copyright 2017-2019 NXP */

#include <linux/ethtool_netlink.h>
#include <linux/net_tstamp.h>
#include <linux/module.h>

#include "enetc.h"

static const u32 enetc_si_regs[] = {
	ENETC_SIMR, ENETC_SIPMAR0, ENETC_SIPMAR1, ENETC_SICBDRMR,
	ENETC_SICBDRSR, ENETC_SICBDRBAR0, ENETC_SICBDRBAR1, ENETC_SICBDRPIR,
	ENETC_SICBDRCIR, ENETC_SICBDRLENR, ENETC_SICAPR0, ENETC_SICAPR1,
	ENETC_SIUEFDCR
};

static const u32 enetc_txbdr_regs[] = {
	ENETC_TBMR, ENETC_TBSR, ENETC_TBBAR0, ENETC_TBBAR1,
	ENETC_TBPIR, ENETC_TBCIR, ENETC_TBLENR, ENETC_TBIER, ENETC_TBICR0,
	ENETC_TBICR1
};

static const u32 enetc_rxbdr_regs[] = {
	ENETC_RBMR, ENETC_RBSR, ENETC_RBBSR, ENETC_RBCIR, ENETC_RBBAR0,
	ENETC_RBBAR1, ENETC_RBPIR, ENETC_RBLENR, ENETC_RBIER, ENETC_RBICR0,
	ENETC_RBICR1
};

static const u32 enetc_port_regs[] = {
	ENETC_PMR, ENETC_PSR, ENETC_PSIPMR, ENETC_PSIPMAR0(0),
	ENETC_PSIPMAR1(0), ENETC_PTXMBAR, ENETC_PCAPR0, ENETC_PCAPR1,
	ENETC_PSICFGR0(0), ENETC_PRFSCAPR, ENETC_PTCMSDUR(0),
	ENETC_PM0_CMD_CFG, ENETC_PM0_MAXFRM, ENETC_PM0_IF_MODE
};

static const u32 enetc_port_mm_regs[] = {
	ENETC_MMCSR, ENETC_PFPMR, ENETC_PTCFPR(0), ENETC_PTCFPR(1),
	ENETC_PTCFPR(2), ENETC_PTCFPR(3), ENETC_PTCFPR(4), ENETC_PTCFPR(5),
	ENETC_PTCFPR(6), ENETC_PTCFPR(7),
};

static int enetc_get_reglen(struct net_device *ndev)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_hw *hw = &priv->si->hw;
	int len;

	len = ARRAY_SIZE(enetc_si_regs);
	len += ARRAY_SIZE(enetc_txbdr_regs) * priv->num_tx_rings;
	len += ARRAY_SIZE(enetc_rxbdr_regs) * priv->num_rx_rings;

	if (hw->port)
		len += ARRAY_SIZE(enetc_port_regs);

	if (hw->port && !!(priv->si->hw_features & ENETC_SI_F_QBU))
		len += ARRAY_SIZE(enetc_port_mm_regs);

	len *= sizeof(u32) * 2; /* store 2 entries per reg: addr and value */

	return len;
}

static void enetc_get_regs(struct net_device *ndev, struct ethtool_regs *regs,
			   void *regbuf)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_hw *hw = &priv->si->hw;
	u32 *buf = (u32 *)regbuf;
	int i, j;
	u32 addr;

	for (i = 0; i < ARRAY_SIZE(enetc_si_regs); i++) {
		*buf++ = enetc_si_regs[i];
		*buf++ = enetc_rd(hw, enetc_si_regs[i]);
	}

	for (i = 0; i < priv->num_tx_rings; i++) {
		for (j = 0; j < ARRAY_SIZE(enetc_txbdr_regs); j++) {
			addr = ENETC_BDR(TX, i, enetc_txbdr_regs[j]);

			*buf++ = addr;
			*buf++ = enetc_rd(hw, addr);
		}
	}

	for (i = 0; i < priv->num_rx_rings; i++) {
		for (j = 0; j < ARRAY_SIZE(enetc_rxbdr_regs); j++) {
			addr = ENETC_BDR(RX, i, enetc_rxbdr_regs[j]);

			*buf++ = addr;
			*buf++ = enetc_rd(hw, addr);
		}
	}

	if (!hw->port)
		return;

	for (i = 0; i < ARRAY_SIZE(enetc_port_regs); i++) {
		addr = ENETC_PORT_BASE + enetc_port_regs[i];
		*buf++ = addr;
		*buf++ = enetc_rd(hw, addr);
	}

	if (priv->si->hw_features & ENETC_SI_F_QBU) {
		for (i = 0; i < ARRAY_SIZE(enetc_port_mm_regs); i++) {
			addr = ENETC_PORT_BASE + enetc_port_mm_regs[i];
			*buf++ = addr;
			*buf++ = enetc_rd(hw, addr);
		}
	}
}

static const struct {
	int reg;
	char name[ETH_GSTRING_LEN];
} enetc_si_counters[] = {
	{ ENETC_SIROCT, "SI rx octets" },
	{ ENETC_SIRFRM, "SI rx frames" },
	{ ENETC_SIRUCA, "SI rx u-cast frames" },
	{ ENETC_SIRMCA, "SI rx m-cast frames" },
	{ ENETC_SITOCT, "SI tx octets" },
	{ ENETC_SITFRM, "SI tx frames" },
	{ ENETC_SITUCA, "SI tx u-cast frames" },
	{ ENETC_SITMCA, "SI tx m-cast frames" },
	{ ENETC_RBDCR(0), "Rx ring 0 discarded frames" },
	{ ENETC_RBDCR(1), "Rx ring 1 discarded frames" },
	{ ENETC_RBDCR(2), "Rx ring 2 discarded frames" },
	{ ENETC_RBDCR(3), "Rx ring 3 discarded frames" },
	{ ENETC_RBDCR(4), "Rx ring 4 discarded frames" },
	{ ENETC_RBDCR(5), "Rx ring 5 discarded frames" },
	{ ENETC_RBDCR(6), "Rx ring 6 discarded frames" },
	{ ENETC_RBDCR(7), "Rx ring 7 discarded frames" },
	{ ENETC_RBDCR(8), "Rx ring 8 discarded frames" },
	{ ENETC_RBDCR(9), "Rx ring 9 discarded frames" },
	{ ENETC_RBDCR(10), "Rx ring 10 discarded frames" },
	{ ENETC_RBDCR(11), "Rx ring 11 discarded frames" },
	{ ENETC_RBDCR(12), "Rx ring 12 discarded frames" },
	{ ENETC_RBDCR(13), "Rx ring 13 discarded frames" },
	{ ENETC_RBDCR(14), "Rx ring 14 discarded frames" },
	{ ENETC_RBDCR(15), "Rx ring 15 discarded frames" },
};

static const struct {
	int reg;
	/* some names are exactly ETH_GSTRING_LEN long with no trailing NUL
	 * (hence __nonstring); they must be copied with memcpy(), not
	 * C-string APIs
	 */
	char name[ETH_GSTRING_LEN] __nonstring;
} enetc_pm_counters[] = {
	{ ENETC_PM_REOCT(0), "MAC rx ethernet octets" },
	{ ENETC_PM_RALN(0), "MAC rx alignment errors" },
	{ ENETC_PM_RXPF(0), "MAC rx valid pause frames" },
	{ ENETC_PM_RFRM(0), "MAC rx valid frames" },
	{ ENETC_PM_RFCS(0), "MAC rx fcs errors" },
	{ ENETC_PM_RVLAN(0), "MAC rx VLAN frames" },
	{ ENETC_PM_RERR(0), "MAC rx frame errors" },
	{ ENETC_PM_RUCA(0), "MAC rx unicast frames" },
	{ ENETC_PM_RMCA(0), "MAC rx multicast frames" },
	{ ENETC_PM_RBCA(0), "MAC rx broadcast frames" },
	{ ENETC_PM_RDRP(0), "MAC rx dropped packets" },
	{ ENETC_PM_RPKT(0), "MAC rx packets" },
	{ ENETC_PM_RUND(0), "MAC rx undersized packets" },
	{ ENETC_PM_R64(0), "MAC rx 64 byte packets" },
	{ ENETC_PM_R127(0), "MAC rx 65-127 byte packets" },
	{ ENETC_PM_R255(0), "MAC rx 128-255 byte packets" },
	{ ENETC_PM_R511(0), "MAC rx 256-511 byte packets" },
	{ ENETC_PM_R1023(0), "MAC rx 512-1023 byte packets" },
	{ ENETC_PM_R1522(0), "MAC rx 1024-1522 byte packets" },
	{ ENETC_PM_R1523X(0), "MAC rx 1523 to max-octet packets" },
	{ ENETC_PM_ROVR(0), "MAC rx oversized packets" },
	{ ENETC_PM_RJBR(0), "MAC rx jabber packets" },
	{ ENETC_PM_RFRG(0), "MAC rx fragment packets" },
	{ ENETC_PM_RCNP(0), "MAC rx control packets" },
	{ ENETC_PM_RDRNTP(0), "MAC rx fifo drop" },
	{ ENETC_PM_TEOCT(0), "MAC tx ethernet octets" },
	{ ENETC_PM_TOCT(0), "MAC tx octets" },
	{ ENETC_PM_TCRSE(0), "MAC tx carrier sense errors" },
	{ ENETC_PM_TXPF(0), "MAC tx valid pause frames" },
	{ ENETC_PM_TFRM(0), "MAC tx frames" },
	{ ENETC_PM_TFCS(0), "MAC tx fcs errors" },
	{ ENETC_PM_TVLAN(0), "MAC tx VLAN frames" },
	{ ENETC_PM_TERR(0), "MAC tx frame errors" },
	{ ENETC_PM_TUCA(0), "MAC tx unicast frames" },
	{ ENETC_PM_TMCA(0), "MAC tx multicast frames" },
	{ ENETC_PM_TBCA(0), "MAC tx broadcast frames" },
	{ ENETC_PM_TPKT(0), "MAC tx packets" },
	{ ENETC_PM_TUND(0), "MAC tx undersized packets" },
	{ ENETC_PM_T64(0), "MAC tx 64 byte packets" },
	{ ENETC_PM_T127(0), "MAC tx 65-127 byte packets" },
	{ ENETC_PM_T255(0), "MAC tx 128-255 byte packets" },
	{ ENETC_PM_T511(0), "MAC tx 256-511 byte packets" },
	{ ENETC_PM_T1023(0), "MAC tx 512-1023 byte packets" },
	{ ENETC_PM_T1522(0), "MAC tx 1024-1522 byte packets" },
	{ ENETC_PM_T1523X(0), "MAC tx 1523 to max-octet packets" },
	{ ENETC_PM_TCNP(0), "MAC tx control packets" },
	{ ENETC_PM_TDFR(0), "MAC tx deferred packets" },
	{ ENETC_PM_TMCOL(0), "MAC tx multiple collisions" },
	{ ENETC_PM_TSCOL(0), "MAC tx single collisions" },
	{ ENETC_PM_TLCOL(0), "MAC tx late collisions" },
	{ ENETC_PM_TECOL(0), "MAC tx excessive collisions" },
};

static const struct {
	int reg;
	char name[ETH_GSTRING_LEN] __nonstring;
} enetc_port_counters[] = {
	{ ENETC_UFDMF, "SI MAC nomatch u-cast discards" },
	{ ENETC_MFDMF, "SI MAC nomatch m-cast discards" },
	{ ENETC_PBFDSIR, "SI MAC nomatch b-cast discards" },
	{ ENETC_PUFDVFR, "SI VLAN nomatch u-cast discards" },
	{ ENETC_PMFDVFR, "SI VLAN nomatch m-cast discards" },
	{ ENETC_PBFDVFR, "SI VLAN nomatch b-cast discards" },
	{ ENETC_PFDMSAPR, "SI pruning discarded frames" },
	{ ENETC_PICDR(0), "ICM DR0 discarded frames" },
	{ ENETC_PICDR(1), "ICM DR1 discarded frames" },
	{ ENETC_PICDR(2), "ICM DR2 discarded frames" },
	{ ENETC_PICDR(3), "ICM DR3 discarded frames" },
};

static const char rx_ring_stats[][ETH_GSTRING_LEN] = {
	"Rx ring %2d frames",
	"Rx ring %2d alloc errors",
	"Rx ring %2d XDP drops",
net: enetc: add support for XDP_TX
For reflecting packets back into the interface they came from, we create
an array of TX software BDs derived from the RX software BDs. Therefore,
we need to extend the TX software BD structure to contain most of the
stuff that's already present in the RX software BD structure, for
reasons that will become evident in a moment.
For a frame with the XDP_TX verdict, we don't reuse any buffer right
away as we do for XDP_DROP (the same page half) or XDP_PASS (the other
page half, same as the skb code path).
Because the buffer transfers ownership from the RX ring to the TX ring,
reusing any page half right away is very dangerous. So what we can do
is recycle the same page half as soon as TX is complete.
The code path is:
enetc_poll
-> enetc_clean_rx_ring_xdp
-> enetc_xdp_tx
-> enetc_refill_rx_ring
(time passes, another MSI interrupt is raised)
enetc_poll
-> enetc_clean_tx_ring
-> enetc_recycle_xdp_tx_buff
But that creates a problem, because there is a potentially large time
window between enetc_xdp_tx and enetc_recycle_xdp_tx_buff, a period in
which we'll have fewer and fewer RX buffers.
Basically, when the ship starts sinking, the knee-jerk reaction is to
let enetc_refill_rx_ring do what it does for the standard skb code path
(refill every 16 consumed buffers), but that turns out to be very
inefficient. The problem is that we have no rx_swbd->page at our
disposal from the enetc_reuse_page path, so enetc_refill_rx_ring would
have to call enetc_new_page for every buffer that we refill (if we
choose to refill at this early stage). Very inefficient, it only makes
the problem worse, because page allocation is an expensive process, and
CPU time is exactly what we're lacking.
Additionally, there is an even bigger problem: if we let
enetc_refill_rx_ring top up the ring's buffers again from the RX path,
remember that the buffers sent to transmission haven't disappeared
anywhere. They will be eventually sent, and processed in
enetc_clean_tx_ring, and an attempt will be made to recycle them.
But surprise, the RX ring is already full of new buffers, because we
were premature in deciding that we should refill. So not only did we
take the expensive decision of allocating new pages, but now we must throw
away perfectly good and reusable buffers.
So what we do is implement an elastic refill mechanism, which keeps
track of the number of in-flight XDP_TX buffer descriptors. We top up
the RX ring only up to the total ring capacity minus the number of BDs
that are in flight (because we know that those BDs will return to us
eventually).
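A minimal sketch of that elastic refill budget, under the assumption that the driver tracks a per-ring count of in-flight XDP_TX BDs (names are illustrative, not the driver's exact code):

```c
#include <assert.h>

/* Top up the RX ring only to capacity minus the XDP_TX BDs in flight,
 * since those buffers return to the ring on TX completion (sketch). */
static int rx_refill_budget(int ring_size, int fill_level,
			    int xdp_tx_in_flight)
{
	int budget = ring_size - fill_level - xdp_tx_in_flight;

	return budget > 0 ? budget : 0;
}
```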
The enetc driver manages 1 RX ring per CPU, and the default TX ring
management is the same. So we do XDP_TX towards the TX ring of the same
index, because it is affined to the same CPU. This will probably not
produce great results when we have a tc-taprio/tc-mqprio qdisc on the
interface, because in that case, the number of TX rings might be
greater, but I didn't add any checks for that yet (mostly because I
didn't know what checks to add).
It should also be noted that we need to change the DMA mapping direction
for RX buffers, since they may now be reflected into the TX ring of the
same device. We choose to use DMA_BIDIRECTIONAL instead of unmapping and
remapping as DMA_TO_DEVICE, because performance is better this way.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-31 23:08:55 +03:00
|
|
|
"Rx ring %2d recycles",
|
|
|
|
"Rx ring %2d recycle failures",
|
2021-03-31 23:08:57 +03:00
|
|
|
"Rx ring %2d redirects",
|
|
|
|
"Rx ring %2d redirect failures",
|
2019-01-22 15:29:55 +02:00
|
|
|
};
|
|
|
|
|
|
|
|
static const char tx_ring_stats[][ETH_GSTRING_LEN] = {
|
|
|
|
"Tx ring %2d frames",
|
net: enetc: add support for XDP_TX
2021-03-31 23:08:55 +03:00
|
|
|
"Tx ring %2d XDP frames",
|
|
|
|
"Tx ring %2d XDP drops",
|
2022-05-10 19:36:15 +03:00
|
|
|
"Tx window drop %2d frames",
|
2019-01-22 15:29:55 +02:00
|
|
|
};
|
|
|
|
|
|
|
|
static int enetc_get_sset_count(struct net_device *ndev, int sset)
|
|
|
|
{
|
|
|
|
struct enetc_ndev_priv *priv = netdev_priv(ndev);
|
2020-03-10 14:51:22 +02:00
|
|
|
int len;
|
|
|
|
|
|
|
|
if (sset != ETH_SS_STATS)
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
|
|
|
|
len = ARRAY_SIZE(enetc_si_counters) +
|
|
|
|
ARRAY_SIZE(tx_ring_stats) * priv->num_tx_rings +
|
|
|
|
ARRAY_SIZE(rx_ring_stats) * priv->num_rx_rings;
|
2019-01-22 15:29:55 +02:00
|
|
|
|
2020-03-10 14:51:22 +02:00
|
|
|
if (!enetc_si_is_pf(priv->si))
|
|
|
|
return len;
|
2019-01-22 15:29:55 +02:00
|
|
|
|
2020-03-10 14:51:22 +02:00
|
|
|
len += ARRAY_SIZE(enetc_port_counters);
|
2025-06-27 10:11:07 +08:00
|
|
|
len += ARRAY_SIZE(enetc_pm_counters);
|
2020-03-10 14:51:22 +02:00
|
|
|
|
|
|
|
return len;
|
2019-01-22 15:29:55 +02:00
|
|
|
}
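The length computation in enetc_get_sset_count() can be modeled as a pure function over the counter-array sizes; the numbers used in the usage check below are illustrative, not the driver's actual counts:

```c
#include <assert.h>

/* ETH_SS_STATS count: per-SI counters, plus one set of ring counter
 * names per TX/RX ring, plus the port and MAC counters on the PF only
 * (sketch of the logic in enetc_get_sset_count). */
static int enetc_stats_count(int n_si, int n_tx, int tx_rings,
			     int n_rx, int rx_rings,
			     int n_port, int n_pm, int is_pf)
{
	int len = n_si + n_tx * tx_rings + n_rx * rx_rings;

	if (is_pf)
		len += n_port + n_pm;

	return len;
}
```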
|
|
|
|
|
|
|
|
static void enetc_get_strings(struct net_device *ndev, u32 stringset, u8 *data)
|
|
|
|
{
|
|
|
|
struct enetc_ndev_priv *priv = netdev_priv(ndev);
|
|
|
|
int i, j;
|
|
|
|
|
|
|
|
switch (stringset) {
|
|
|
|
case ETH_SS_STATS:
|
2024-10-25 13:37:57 -07:00
|
|
|
for (i = 0; i < ARRAY_SIZE(enetc_si_counters); i++)
|
|
|
|
ethtool_puts(&data, enetc_si_counters[i].name);
|
|
|
|
for (i = 0; i < priv->num_tx_rings; i++)
|
|
|
|
for (j = 0; j < ARRAY_SIZE(tx_ring_stats); j++)
|
|
|
|
ethtool_sprintf(&data, tx_ring_stats[j], i);
|
|
|
|
for (i = 0; i < priv->num_rx_rings; i++)
|
|
|
|
for (j = 0; j < ARRAY_SIZE(rx_ring_stats); j++)
|
|
|
|
ethtool_sprintf(&data, rx_ring_stats[j], i);
|
2019-01-22 15:29:55 +02:00
|
|
|
|
|
|
|
if (!enetc_si_is_pf(priv->si))
|
|
|
|
break;
|
|
|
|
|
2024-10-25 13:37:57 -07:00
|
|
|
for (i = 0; i < ARRAY_SIZE(enetc_port_counters); i++)
|
net: ethtool: Adjust exactly ETH_GSTRING_LEN-long stats to use memcpy
Many drivers populate the stats buffer using C-String based APIs (e.g.
ethtool_sprintf() and ethtool_puts()), usually when building up the
list of stats individually (i.e. with a for() loop). This, however,
requires that the source strings be populated in such a way as to have
a terminating NUL byte in the source.
Other drivers populate the stats buffer directly using one big memcpy()
of an entire array of strings. No NUL termination is needed here, as the
bytes are being directly passed through. Yet others will build up the
stats buffer individually, but also use memcpy(). This, too, does not
need NUL termination of the source strings.
However, there are cases where the strings that populate the
source stats strings are exactly ETH_GSTRING_LEN long, and GCC
15's -Wunterminated-string-initialization option complains that the
trailing NUL byte has been truncated. This situation is fine only if the
driver is using the memcpy() approach. If the C-String APIs are used,
the destination string name will have its final byte truncated by the
required trailing NUL byte applied by the C-string API.
For drivers that are already using memcpy() but have initializers that
truncate the NUL terminator, mark their source strings as __nonstring to
silence the GCC warnings.
For drivers that have initializers that truncate the NUL terminator and
are using the C-String APIs, switch to memcpy() to avoid destination
string truncation and mark their source strings as __nonstring to silence
the GCC warnings. (Also introduce ethtool_cpy() as a helper to make this
an easy replacement).
Specifically the following warnings were investigated and addressed:
../drivers/net/ethernet/chelsio/cxgb/cxgb2.c:364:9: warning: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (33 chars into 32 available) [-Wunterminated-string-initialization]
364 | "TxFramesAbortedDueToXSCollisions",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../drivers/net/ethernet/freescale/enetc/enetc_ethtool.c:165:33: warning: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (33 chars into 32 available) [-Wunterminated-string-initialization]
165 | { ENETC_PM_R1523X(0), "MAC rx 1523 to max-octet packets" },
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../drivers/net/ethernet/freescale/enetc/enetc_ethtool.c:190:33: warning: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (33 chars into 32 available) [-Wunterminated-string-initialization]
190 | { ENETC_PM_T1523X(0), "MAC tx 1523 to max-octet packets" },
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../drivers/net/ethernet/google/gve/gve_ethtool.c:76:9: warning: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (33 chars into 32 available) [-Wunterminated-string-initialization]
76 | "adminq_dcfg_device_resources_cnt", "adminq_set_driver_parameter_cnt",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c:117:53: warning: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (33 chars into 32 available) [-Wunterminated-string-initialization]
117 | STMMAC_STAT(ptp_rx_msg_type_pdelay_follow_up),
| ^
../drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c:46:12: note: in definition of macro 'STMMAC_STAT'
46 | { #m, sizeof_field(struct stmmac_extra_stats, m), \
| ^
../drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c:328:24: warning: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (33 chars into 32 available) [-Wunterminated-string-initialization]
328 | .str = "a_mac_control_frames_transmitted",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c:340:24: warning: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (33 chars into 32 available) [-Wunterminated-string-initialization]
340 | .str = "a_pause_mac_ctrl_frames_received",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Signed-off-by: Kees Cook <kees@kernel.org>
Reviewed-by: Petr Machata <petrm@nvidia.com> # for mlxsw
Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com>
Link: https://patch.msgid.link/20250416010210.work.904-kees@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-15 18:02:15 -07:00
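The truncation hazard can be seen in miniature: a stat name of exactly ETH_GSTRING_LEN bytes has no trailing NUL, so only a raw memcpy() preserves its last character, while a copy that forces a terminating NUL into the fixed-size destination would clip it. This sketch uses an illustrative helper, not the ethtool API itself:

```c
#include <string.h>
#include <assert.h>

#define ETH_GSTRING_LEN 32

/* Copy a (possibly non-NUL-terminated) stat name into the strings
 * buffer, as the memcpy()-based ethtool_cpy() approach does (sketch). */
static void stat_name_copy(char *dst, const char *name)
{
	memcpy(dst, name, ETH_GSTRING_LEN);
}
```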
|
|
|
ethtool_cpy(&data, enetc_port_counters[i].name);
|
2024-10-25 13:37:57 -07:00
|
|
|
|
2025-06-27 10:11:07 +08:00
|
|
|
for (i = 0; i < ARRAY_SIZE(enetc_pm_counters); i++)
|
|
|
|
ethtool_cpy(&data, enetc_pm_counters[i].name);
|
|
|
|
|
2019-01-22 15:29:55 +02:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void enetc_get_ethtool_stats(struct net_device *ndev,
|
|
|
|
struct ethtool_stats *stats, u64 *data)
|
|
|
|
{
|
|
|
|
struct enetc_ndev_priv *priv = netdev_priv(ndev);
|
|
|
|
struct enetc_hw *hw = &priv->si->hw;
|
|
|
|
int i, o = 0;
|
|
|
|
|
|
|
|
for (i = 0; i < ARRAY_SIZE(enetc_si_counters); i++)
|
|
|
|
data[o++] = enetc_rd64(hw, enetc_si_counters[i].reg);
|
|
|
|
|
net: enetc: add support for XDP_TX
2021-03-31 23:08:55 +03:00
|
|
|
for (i = 0; i < priv->num_tx_rings; i++) {
|
2019-01-22 15:29:55 +02:00
|
|
|
data[o++] = priv->tx_ring[i]->stats.packets;
|
net: enetc: add support for XDP_TX
2021-03-31 23:08:55 +03:00
|
|
|
data[o++] = priv->tx_ring[i]->stats.xdp_tx;
|
|
|
|
data[o++] = priv->tx_ring[i]->stats.xdp_tx_drops;
|
2022-05-10 19:36:15 +03:00
|
|
|
data[o++] = priv->tx_ring[i]->stats.win_drop;
|
net: enetc: add support for XDP_TX
2021-03-31 23:08:55 +03:00
|
|
|
}
|
2019-01-22 15:29:55 +02:00
|
|
|
|
|
|
|
for (i = 0; i < priv->num_rx_rings; i++) {
|
|
|
|
data[o++] = priv->rx_ring[i]->stats.packets;
|
|
|
|
data[o++] = priv->rx_ring[i]->stats.rx_alloc_errs;
|
net: enetc: add support for XDP_DROP and XDP_PASS
2021-03-31 23:08:54 +03:00
|
|
|
data[o++] = priv->rx_ring[i]->stats.xdp_drops;
|
net: enetc: add support for XDP_TX
2021-03-31 23:08:55 +03:00
		data[o++] = priv->rx_ring[i]->stats.recycles;
		data[o++] = priv->rx_ring[i]->stats.recycle_failures;
		data[o++] = priv->rx_ring[i]->stats.xdp_redirect;
		data[o++] = priv->rx_ring[i]->stats.xdp_redirect_failures;
	}

	if (!enetc_si_is_pf(priv->si))
		return;

	for (i = 0; i < ARRAY_SIZE(enetc_port_counters); i++)
		data[o++] = enetc_port_rd(hw, enetc_port_counters[i].reg);

	for (i = 0; i < ARRAY_SIZE(enetc_pm_counters); i++)
		data[o++] = enetc_port_rd64(hw, enetc_pm_counters[i].reg);
}

static void enetc_pause_stats(struct enetc_hw *hw, int mac,
			      struct ethtool_pause_stats *pause_stats)
{
	pause_stats->tx_pause_frames = enetc_port_rd64(hw, ENETC_PM_TXPF(mac));
	pause_stats->rx_pause_frames = enetc_port_rd64(hw, ENETC_PM_RXPF(mac));
}

static void enetc_get_pause_stats(struct net_device *ndev,
				  struct ethtool_pause_stats *pause_stats)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_hw *hw = &priv->si->hw;
	struct enetc_si *si = priv->si;

	switch (pause_stats->src) {
	case ETHTOOL_MAC_STATS_SRC_EMAC:
		enetc_pause_stats(hw, 0, pause_stats);
		break;
	case ETHTOOL_MAC_STATS_SRC_PMAC:
		if (si->hw_features & ENETC_SI_F_QBU)
			enetc_pause_stats(hw, 1, pause_stats);
		break;
	case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
		ethtool_aggregate_pause_stats(ndev, pause_stats);
		break;
	}
}

static void enetc_mac_stats(struct enetc_hw *hw, int mac,
			    struct ethtool_eth_mac_stats *s)
{
	s->FramesTransmittedOK = enetc_port_rd64(hw, ENETC_PM_TFRM(mac));
	s->SingleCollisionFrames = enetc_port_rd64(hw, ENETC_PM_TSCOL(mac));
	s->MultipleCollisionFrames = enetc_port_rd64(hw, ENETC_PM_TMCOL(mac));
	s->FramesReceivedOK = enetc_port_rd64(hw, ENETC_PM_RFRM(mac));
	s->FrameCheckSequenceErrors = enetc_port_rd64(hw, ENETC_PM_RFCS(mac));
	s->AlignmentErrors = enetc_port_rd64(hw, ENETC_PM_RALN(mac));
	s->OctetsTransmittedOK = enetc_port_rd64(hw, ENETC_PM_TEOCT(mac));
	s->FramesWithDeferredXmissions = enetc_port_rd64(hw, ENETC_PM_TDFR(mac));
	s->LateCollisions = enetc_port_rd64(hw, ENETC_PM_TLCOL(mac));
	s->FramesAbortedDueToXSColls = enetc_port_rd64(hw, ENETC_PM_TECOL(mac));
	s->FramesLostDueToIntMACXmitError = enetc_port_rd64(hw, ENETC_PM_TERR(mac));
	s->CarrierSenseErrors = enetc_port_rd64(hw, ENETC_PM_TCRSE(mac));
	s->OctetsReceivedOK = enetc_port_rd64(hw, ENETC_PM_REOCT(mac));
	s->FramesLostDueToIntMACRcvError = enetc_port_rd64(hw, ENETC_PM_RDRNTP(mac));
	s->MulticastFramesXmittedOK = enetc_port_rd64(hw, ENETC_PM_TMCA(mac));
	s->BroadcastFramesXmittedOK = enetc_port_rd64(hw, ENETC_PM_TBCA(mac));
	s->MulticastFramesReceivedOK = enetc_port_rd64(hw, ENETC_PM_RMCA(mac));
	s->BroadcastFramesReceivedOK = enetc_port_rd64(hw, ENETC_PM_RBCA(mac));
}

static void enetc_ctrl_stats(struct enetc_hw *hw, int mac,
			     struct ethtool_eth_ctrl_stats *s)
{
	s->MACControlFramesTransmitted = enetc_port_rd64(hw, ENETC_PM_TCNP(mac));
	s->MACControlFramesReceived = enetc_port_rd64(hw, ENETC_PM_RCNP(mac));
}

static const struct ethtool_rmon_hist_range enetc_rmon_ranges[] = {
	{ 64, 64 },
	{ 65, 127 },
	{ 128, 255 },
	{ 256, 511 },
	{ 512, 1023 },
	{ 1024, 1522 },
	{ 1523, ENETC_MAC_MAXFRM_SIZE },
	{},
};

static void enetc_rmon_stats(struct enetc_hw *hw, int mac,
			     struct ethtool_rmon_stats *s)
{
	s->undersize_pkts = enetc_port_rd64(hw, ENETC_PM_RUND(mac));
	s->oversize_pkts = enetc_port_rd64(hw, ENETC_PM_ROVR(mac));
	s->fragments = enetc_port_rd64(hw, ENETC_PM_RFRG(mac));
	s->jabbers = enetc_port_rd64(hw, ENETC_PM_RJBR(mac));

	s->hist[0] = enetc_port_rd64(hw, ENETC_PM_R64(mac));
	s->hist[1] = enetc_port_rd64(hw, ENETC_PM_R127(mac));
	s->hist[2] = enetc_port_rd64(hw, ENETC_PM_R255(mac));
	s->hist[3] = enetc_port_rd64(hw, ENETC_PM_R511(mac));
	s->hist[4] = enetc_port_rd64(hw, ENETC_PM_R1023(mac));
	s->hist[5] = enetc_port_rd64(hw, ENETC_PM_R1522(mac));
	s->hist[6] = enetc_port_rd64(hw, ENETC_PM_R1523X(mac));

	s->hist_tx[0] = enetc_port_rd64(hw, ENETC_PM_T64(mac));
	s->hist_tx[1] = enetc_port_rd64(hw, ENETC_PM_T127(mac));
	s->hist_tx[2] = enetc_port_rd64(hw, ENETC_PM_T255(mac));
	s->hist_tx[3] = enetc_port_rd64(hw, ENETC_PM_T511(mac));
	s->hist_tx[4] = enetc_port_rd64(hw, ENETC_PM_T1023(mac));
	s->hist_tx[5] = enetc_port_rd64(hw, ENETC_PM_T1522(mac));
	s->hist_tx[6] = enetc_port_rd64(hw, ENETC_PM_T1523X(mac));
}

static void enetc_get_eth_mac_stats(struct net_device *ndev,
				    struct ethtool_eth_mac_stats *mac_stats)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_hw *hw = &priv->si->hw;
	struct enetc_si *si = priv->si;

	switch (mac_stats->src) {
	case ETHTOOL_MAC_STATS_SRC_EMAC:
		enetc_mac_stats(hw, 0, mac_stats);
		break;
	case ETHTOOL_MAC_STATS_SRC_PMAC:
		if (si->hw_features & ENETC_SI_F_QBU)
			enetc_mac_stats(hw, 1, mac_stats);
		break;
	case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
		ethtool_aggregate_mac_stats(ndev, mac_stats);
		break;
	}
}

static void enetc_get_eth_ctrl_stats(struct net_device *ndev,
				     struct ethtool_eth_ctrl_stats *ctrl_stats)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_hw *hw = &priv->si->hw;
	struct enetc_si *si = priv->si;

	switch (ctrl_stats->src) {
	case ETHTOOL_MAC_STATS_SRC_EMAC:
		enetc_ctrl_stats(hw, 0, ctrl_stats);
		break;
	case ETHTOOL_MAC_STATS_SRC_PMAC:
		if (si->hw_features & ENETC_SI_F_QBU)
			enetc_ctrl_stats(hw, 1, ctrl_stats);
		break;
	case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
		ethtool_aggregate_ctrl_stats(ndev, ctrl_stats);
		break;
	}
}

static void enetc_get_rmon_stats(struct net_device *ndev,
				 struct ethtool_rmon_stats *rmon_stats,
				 const struct ethtool_rmon_hist_range **ranges)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_hw *hw = &priv->si->hw;
	struct enetc_si *si = priv->si;

	*ranges = enetc_rmon_ranges;

	switch (rmon_stats->src) {
	case ETHTOOL_MAC_STATS_SRC_EMAC:
		enetc_rmon_stats(hw, 0, rmon_stats);
		break;
	case ETHTOOL_MAC_STATS_SRC_PMAC:
		if (si->hw_features & ENETC_SI_F_QBU)
			enetc_rmon_stats(hw, 1, rmon_stats);
		break;
	case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
		ethtool_aggregate_rmon_stats(ndev, rmon_stats);
		break;
	}
}

#define ENETC_RSSHASH_L3 (RXH_L2DA | RXH_VLAN | RXH_L3_PROTO | RXH_IP_SRC | \
			  RXH_IP_DST)
#define ENETC_RSSHASH_L4 (ENETC_RSSHASH_L3 | RXH_L4_B_0_1 | RXH_L4_B_2_3)

static int enetc_get_rxfh_fields(struct net_device *netdev,
				 struct ethtool_rxfh_fields *rxnfc)
{
	static const u32 rsshash[] = {
		[TCP_V4_FLOW]    = ENETC_RSSHASH_L4,
		[UDP_V4_FLOW]    = ENETC_RSSHASH_L4,
		[SCTP_V4_FLOW]   = ENETC_RSSHASH_L4,
		[AH_ESP_V4_FLOW] = ENETC_RSSHASH_L3,
		[IPV4_FLOW]      = ENETC_RSSHASH_L3,
		[TCP_V6_FLOW]    = ENETC_RSSHASH_L4,
		[UDP_V6_FLOW]    = ENETC_RSSHASH_L4,
		[SCTP_V6_FLOW]   = ENETC_RSSHASH_L4,
		[AH_ESP_V6_FLOW] = ENETC_RSSHASH_L3,
		[IPV6_FLOW]      = ENETC_RSSHASH_L3,
		[ETHER_FLOW]     = 0,
	};

	if (rxnfc->flow_type >= ARRAY_SIZE(rsshash))
		return -EINVAL;

	rxnfc->data = rsshash[rxnfc->flow_type];

	return 0;
}

/* current HW spec does byte reversal on everything including MAC addresses */
static void ether_addr_copy_swap(u8 *dst, const u8 *src)
{
	int i;

	for (i = 0; i < ETH_ALEN; i++)
		dst[i] = src[ETH_ALEN - i - 1];
}

static int enetc_set_cls_entry(struct enetc_si *si,
			       struct ethtool_rx_flow_spec *fs, bool en)
{
	struct ethtool_tcpip4_spec *l4ip4_h, *l4ip4_m;
	struct ethtool_usrip4_spec *l3ip4_h, *l3ip4_m;
	struct ethhdr *eth_h, *eth_m;
	struct enetc_cmd_rfse rfse = { {0} };

	if (!en)
		goto done;

	switch (fs->flow_type & 0xff) {
	case TCP_V4_FLOW:
		l4ip4_h = &fs->h_u.tcp_ip4_spec;
		l4ip4_m = &fs->m_u.tcp_ip4_spec;
		goto l4ip4;
	case UDP_V4_FLOW:
		l4ip4_h = &fs->h_u.udp_ip4_spec;
		l4ip4_m = &fs->m_u.udp_ip4_spec;
		goto l4ip4;
	case SCTP_V4_FLOW:
		l4ip4_h = &fs->h_u.sctp_ip4_spec;
		l4ip4_m = &fs->m_u.sctp_ip4_spec;
l4ip4:
		rfse.sip_h[0] = l4ip4_h->ip4src;
		rfse.sip_m[0] = l4ip4_m->ip4src;
		rfse.dip_h[0] = l4ip4_h->ip4dst;
		rfse.dip_m[0] = l4ip4_m->ip4dst;
		rfse.sport_h = ntohs(l4ip4_h->psrc);
		rfse.sport_m = ntohs(l4ip4_m->psrc);
		rfse.dport_h = ntohs(l4ip4_h->pdst);
		rfse.dport_m = ntohs(l4ip4_m->pdst);
		if (l4ip4_m->tos)
			netdev_warn(si->ndev, "ToS field is not supported and was ignored\n");
		rfse.ethtype_h = ETH_P_IP; /* IPv4 */
		rfse.ethtype_m = 0xffff;
		break;
	case IP_USER_FLOW:
		l3ip4_h = &fs->h_u.usr_ip4_spec;
		l3ip4_m = &fs->m_u.usr_ip4_spec;

		rfse.sip_h[0] = l3ip4_h->ip4src;
		rfse.sip_m[0] = l3ip4_m->ip4src;
		rfse.dip_h[0] = l3ip4_h->ip4dst;
		rfse.dip_m[0] = l3ip4_m->ip4dst;
		if (l3ip4_m->tos)
			netdev_warn(si->ndev, "ToS field is not supported and was ignored\n");
		rfse.ethtype_h = ETH_P_IP; /* IPv4 */
		rfse.ethtype_m = 0xffff;
		break;
	case ETHER_FLOW:
		eth_h = &fs->h_u.ether_spec;
		eth_m = &fs->m_u.ether_spec;

		ether_addr_copy_swap(rfse.smac_h, eth_h->h_source);
		ether_addr_copy_swap(rfse.smac_m, eth_m->h_source);
		ether_addr_copy_swap(rfse.dmac_h, eth_h->h_dest);
		ether_addr_copy_swap(rfse.dmac_m, eth_m->h_dest);
		rfse.ethtype_h = ntohs(eth_h->h_proto);
		rfse.ethtype_m = ntohs(eth_m->h_proto);
		break;
	default:
		return -EOPNOTSUPP;
	}

	rfse.mode |= ENETC_RFSE_EN;
	if (fs->ring_cookie != RX_CLS_FLOW_DISC) {
		rfse.mode |= ENETC_RFSE_MODE_BD;
		rfse.result = fs->ring_cookie;
	}

done:
	return enetc_set_fs_entry(si, &rfse, fs->location);
}

static int enetc_get_rxnfc(struct net_device *ndev, struct ethtool_rxnfc *rxnfc,
			   u32 *rule_locs)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	int i, j;

	switch (rxnfc->cmd) {
	case ETHTOOL_GRXRINGS:
		rxnfc->data = priv->num_rx_rings;
		break;
	case ETHTOOL_GRXCLSRLCNT:
		/* total number of entries */
		rxnfc->data = priv->si->num_fs_entries;
		/* number of entries in use */
		rxnfc->rule_cnt = 0;
		for (i = 0; i < priv->si->num_fs_entries; i++)
			if (priv->cls_rules[i].used)
				rxnfc->rule_cnt++;
		break;
	case ETHTOOL_GRXCLSRULE:
		if (rxnfc->fs.location >= priv->si->num_fs_entries)
			return -EINVAL;

		/* get entry x */
		rxnfc->fs = priv->cls_rules[rxnfc->fs.location].fs;
		break;
	case ETHTOOL_GRXCLSRLALL:
		/* total number of entries */
		rxnfc->data = priv->si->num_fs_entries;
		/* array of indexes of used entries */
		j = 0;
		for (i = 0; i < priv->si->num_fs_entries; i++) {
			if (!priv->cls_rules[i].used)
				continue;
			if (j == rxnfc->rule_cnt)
				return -EMSGSIZE;
			rule_locs[j++] = i;
		}
		/* number of entries in use */
		rxnfc->rule_cnt = j;
		break;
	default:
		return -EOPNOTSUPP;
	}

	return 0;
}

/* i.MX95 ENETC does not support RFS table, but we can use ingress port
 * filter table to implement Wake-on-LAN filter or drop the matched flow,
 * so the implementation will be different from enetc_get_rxnfc() and
 * enetc_set_rxnfc(). Therefore, add enetc4_get_rxnfc() for ENETC v4 PF.
 */
static int enetc4_get_rxnfc(struct net_device *ndev, struct ethtool_rxnfc *rxnfc,
			    u32 *rule_locs)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);

	switch (rxnfc->cmd) {
	case ETHTOOL_GRXRINGS:
		rxnfc->data = priv->num_rx_rings;
		break;
	default:
		return -EOPNOTSUPP;
	}

	return 0;
}

static int enetc_set_rxnfc(struct net_device *ndev, struct ethtool_rxnfc *rxnfc)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	int err;

	switch (rxnfc->cmd) {
	case ETHTOOL_SRXCLSRLINS:
		if (rxnfc->fs.location >= priv->si->num_fs_entries)
			return -EINVAL;

		if (rxnfc->fs.ring_cookie >= priv->num_rx_rings &&
		    rxnfc->fs.ring_cookie != RX_CLS_FLOW_DISC)
			return -EINVAL;

		err = enetc_set_cls_entry(priv->si, &rxnfc->fs, true);
		if (err)
			return err;
		priv->cls_rules[rxnfc->fs.location].fs = rxnfc->fs;
		priv->cls_rules[rxnfc->fs.location].used = 1;
		break;
	case ETHTOOL_SRXCLSRLDEL:
		if (rxnfc->fs.location >= priv->si->num_fs_entries)
			return -EINVAL;

		err = enetc_set_cls_entry(priv->si, &rxnfc->fs, false);
		if (err)
			return err;
		priv->cls_rules[rxnfc->fs.location].used = 0;
		break;
	default:
		return -EOPNOTSUPP;
	}

	return 0;
}

static u32 enetc_get_rxfh_key_size(struct net_device *ndev)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);

	/* return the size of the RX flow hash key. PF only */
	return (priv->si->hw.port) ? ENETC_RSSHASH_KEY_SIZE : 0;
}

static u32 enetc_get_rxfh_indir_size(struct net_device *ndev)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);

	/* return the size of the RX flow hash indirection table */
	return priv->si->num_rss;
}

static int enetc_get_rss_key_base(struct enetc_si *si)
{
	if (is_enetc_rev1(si))
		return ENETC_PRSSK(0);

	return ENETC4_PRSSKR(0);
}

static void enetc_get_rss_key(struct enetc_si *si, const u8 *key)
{
	int base = enetc_get_rss_key_base(si);
	struct enetc_hw *hw = &si->hw;
	int i;

	for (i = 0; i < ENETC_RSSHASH_KEY_SIZE / 4; i++)
		((u32 *)key)[i] = enetc_port_rd(hw, base + i * 4);
}

static int enetc_get_rxfh(struct net_device *ndev,
			  struct ethtool_rxfh_param *rxfh)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_si *si = priv->si;
	int err = 0;

	/* return hash function */
	rxfh->hfunc = ETH_RSS_HASH_TOP;

	/* return hash key */
	if (rxfh->key && enetc_si_is_pf(si))
		enetc_get_rss_key(si, rxfh->key);

	/* return RSS table */
	if (rxfh->indir)
		err = si->ops->get_rss_table(si, rxfh->indir, si->num_rss);

	return err;
}

void enetc_set_rss_key(struct enetc_si *si, const u8 *bytes)
{
	int base = enetc_get_rss_key_base(si);
	struct enetc_hw *hw = &si->hw;
	int i;

	for (i = 0; i < ENETC_RSSHASH_KEY_SIZE / 4; i++)
		enetc_port_wr(hw, base + i * 4, ((u32 *)bytes)[i]);
}
EXPORT_SYMBOL_GPL(enetc_set_rss_key);

static int enetc_set_rxfh(struct net_device *ndev,
			  struct ethtool_rxfh_param *rxfh,
			  struct netlink_ext_ack *extack)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_si *si = priv->si;
	int err = 0;

	/* set hash key, if PF */
	if (rxfh->key && enetc_si_is_pf(si))
		enetc_set_rss_key(si, rxfh->key);

	/* set RSS table */
	if (rxfh->indir)
		err = si->ops->set_rss_table(si, rxfh->indir, si->num_rss);

	return err;
}

static void enetc_get_ringparam(struct net_device *ndev,
				struct ethtool_ringparam *ring,
				struct kernel_ethtool_ringparam *kernel_ring,
				struct netlink_ext_ack *extack)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);

	ring->rx_pending = priv->rx_bd_count;
	ring->tx_pending = priv->tx_bd_count;

	/* do some h/w sanity checks for BDR length */
	if (netif_running(ndev)) {
		struct enetc_hw *hw = &priv->si->hw;
		u32 val = enetc_rxbdr_rd(hw, 0, ENETC_RBLENR);

		if (val != priv->rx_bd_count)
			netif_err(priv, hw, ndev, "RxBDR[RBLENR] = %d!\n", val);

		val = enetc_txbdr_rd(hw, 0, ENETC_TBLENR);

		if (val != priv->tx_bd_count)
			netif_err(priv, hw, ndev, "TxBDR[TBLENR] = %d!\n", val);
	}
}

static int enetc_get_coalesce(struct net_device *ndev,
			      struct ethtool_coalesce *ic,
			      struct kernel_ethtool_coalesce *kernel_coal,
			      struct netlink_ext_ack *extack)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_int_vector *v = priv->int_vector[0];
	u64 clk_freq = priv->sysclk_freq;

	ic->tx_coalesce_usecs = enetc_cycles_to_usecs(priv->tx_ictt, clk_freq);
	ic->rx_coalesce_usecs = enetc_cycles_to_usecs(v->rx_ictt, clk_freq);

	ic->tx_max_coalesced_frames = ENETC_TXIC_PKTTHR;
	ic->rx_max_coalesced_frames = ENETC_RXIC_PKTTHR;

	ic->use_adaptive_rx_coalesce = priv->ic_mode & ENETC_IC_RX_ADAPTIVE;

	return 0;
}

static int enetc_set_coalesce(struct net_device *ndev,
			      struct ethtool_coalesce *ic,
			      struct kernel_ethtool_coalesce *kernel_coal,
			      struct netlink_ext_ack *extack)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	u64 clk_freq = priv->sysclk_freq;
	u32 rx_ictt, tx_ictt;
	int i, ic_mode;
	bool changed;

	tx_ictt = enetc_usecs_to_cycles(ic->tx_coalesce_usecs, clk_freq);
	rx_ictt = enetc_usecs_to_cycles(ic->rx_coalesce_usecs, clk_freq);

	if (ic->rx_max_coalesced_frames != ENETC_RXIC_PKTTHR)
		return -EOPNOTSUPP;

	if (ic->tx_max_coalesced_frames != ENETC_TXIC_PKTTHR)
		return -EOPNOTSUPP;

	ic_mode = ENETC_IC_NONE;
	if (ic->use_adaptive_rx_coalesce) {
		ic_mode |= ENETC_IC_RX_ADAPTIVE;
		rx_ictt = 0x1;
	} else {
		ic_mode |= rx_ictt ? ENETC_IC_RX_MANUAL : 0;
	}

	ic_mode |= tx_ictt ? ENETC_IC_TX_MANUAL : 0;

	/* commit the settings */
	changed = (ic_mode != priv->ic_mode) || (priv->tx_ictt != tx_ictt);

	priv->ic_mode = ic_mode;
	priv->tx_ictt = tx_ictt;

	for (i = 0; i < priv->bdr_int_num; i++) {
		struct enetc_int_vector *v = priv->int_vector[i];

		v->rx_ictt = rx_ictt;
		v->rx_dim_en = !!(ic_mode & ENETC_IC_RX_ADAPTIVE);
	}

	if (netif_running(ndev) && changed) {
		/* reconfigure the operation mode of h/w interrupts,
		 * traffic needs to be paused in the process
		 */
		enetc_stop(ndev);
		enetc_start(ndev);
	}

	return 0;
}

static int enetc_get_ts_info(struct net_device *ndev,
			     struct kernel_ethtool_ts_info *info)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	int *phc_idx;

	phc_idx = symbol_get(enetc_phc_index);
	if (phc_idx) {
		info->phc_index = *phc_idx;
		symbol_put(enetc_phc_index);
	}

	if (!IS_ENABLED(CONFIG_FSL_ENETC_PTP_CLOCK)) {
		info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE;

		return 0;
	}

	info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
				SOF_TIMESTAMPING_RX_HARDWARE |
				SOF_TIMESTAMPING_RAW_HARDWARE |
				SOF_TIMESTAMPING_TX_SOFTWARE;

	info->tx_types = (1 << HWTSTAMP_TX_OFF) |
			 (1 << HWTSTAMP_TX_ON);

	if (enetc_si_is_pf(priv->si))
		info->tx_types |= (1 << HWTSTAMP_TX_ONESTEP_SYNC);

	info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
			   (1 << HWTSTAMP_FILTER_ALL);

	return 0;
}

static void enetc_get_wol(struct net_device *dev,
			  struct ethtool_wolinfo *wol)
{
	wol->supported = 0;
	wol->wolopts = 0;

	if (dev->phydev)
		phy_ethtool_get_wol(dev->phydev, wol);
}

static int enetc_set_wol(struct net_device *dev,
			 struct ethtool_wolinfo *wol)
{
	int ret;

	if (!dev->phydev)
		return -EOPNOTSUPP;

	ret = phy_ethtool_set_wol(dev->phydev, wol);
	if (!ret)
		device_set_wakeup_enable(&dev->dev, wol->wolopts);

	return ret;
}

static void enetc_get_pauseparam(struct net_device *dev,
				 struct ethtool_pauseparam *pause)
{
	struct enetc_ndev_priv *priv = netdev_priv(dev);

	phylink_ethtool_get_pauseparam(priv->phylink, pause);
}

static int enetc_set_pauseparam(struct net_device *dev,
				struct ethtool_pauseparam *pause)
{
	struct enetc_ndev_priv *priv = netdev_priv(dev);

	return phylink_ethtool_set_pauseparam(priv->phylink, pause);
}

static int enetc_get_link_ksettings(struct net_device *dev,
				    struct ethtool_link_ksettings *cmd)
{
	struct enetc_ndev_priv *priv = netdev_priv(dev);

	if (!priv->phylink)
		return -EOPNOTSUPP;

	return phylink_ethtool_ksettings_get(priv->phylink, cmd);
}

static int enetc_set_link_ksettings(struct net_device *dev,
				    const struct ethtool_link_ksettings *cmd)
{
	struct enetc_ndev_priv *priv = netdev_priv(dev);

	if (!priv->phylink)
		return -EOPNOTSUPP;

	return phylink_ethtool_ksettings_set(priv->phylink, cmd);
}

static void enetc_get_mm_stats(struct net_device *ndev,
			       struct ethtool_mm_stats *s)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_hw *hw = &priv->si->hw;
	struct enetc_si *si = priv->si;

	if (!(si->hw_features & ENETC_SI_F_QBU))
		return;

	s->MACMergeFrameAssErrorCount = enetc_port_rd(hw, ENETC_MMFAECR);
	s->MACMergeFrameSmdErrorCount = enetc_port_rd(hw, ENETC_MMFSECR);
	s->MACMergeFrameAssOkCount = enetc_port_rd(hw, ENETC_MMFAOCR);
	s->MACMergeFragCountRx = enetc_port_rd(hw, ENETC_MMFCRXR);
	s->MACMergeFragCountTx = enetc_port_rd(hw, ENETC_MMFCTXR);
	s->MACMergeHoldCount = enetc_port_rd(hw, ENETC_MMHCR);
}

static int enetc_get_mm(struct net_device *ndev, struct ethtool_mm_state *state)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_si *si = priv->si;
	struct enetc_hw *hw = &si->hw;
	u32 lafs, rafs, val;

	if (!(si->hw_features & ENETC_SI_F_QBU))
		return -EOPNOTSUPP;

	mutex_lock(&priv->mm_lock);

	val = enetc_port_rd(hw, ENETC_PFPMR);
	state->pmac_enabled = !!(val & ENETC_PFPMR_PMACE);

	val = enetc_port_rd(hw, ENETC_MMCSR);

	switch (ENETC_MMCSR_GET_VSTS(val)) {
	case 0:
		state->verify_status = ETHTOOL_MM_VERIFY_STATUS_DISABLED;
		break;
	case 2:
		state->verify_status = ETHTOOL_MM_VERIFY_STATUS_VERIFYING;
		break;
	case 3:
		state->verify_status = ETHTOOL_MM_VERIFY_STATUS_SUCCEEDED;
		break;
	case 4:
		state->verify_status = ETHTOOL_MM_VERIFY_STATUS_FAILED;
		break;
	case 5:
	default:
		state->verify_status = ETHTOOL_MM_VERIFY_STATUS_UNKNOWN;
		break;
	}

	rafs = ENETC_MMCSR_GET_RAFS(val);
	state->tx_min_frag_size = ethtool_mm_frag_size_add_to_min(rafs);
	lafs = ENETC_MMCSR_GET_LAFS(val);
	state->rx_min_frag_size = ethtool_mm_frag_size_add_to_min(lafs);
	state->tx_enabled = !!(val & ENETC_MMCSR_LPE); /* mirror of MMCSR_ME */
net: enetc: report mm tx-active based on tx-enabled and verify-status
The MMCSR register contains 2 fields with overlapping meaning:
- LPA (Local preemption active):
This read-only status bit indicates whether preemption is active for
this port. This bit will be set if preemption is both enabled and has
completed the verification process.
- TXSTS (Merge status):
This read-only status field provides the state of the MAC Merge sublayer
transmit status as defined in IEEE Std 802.3-2018 Clause 99.
00 Transmit preemption is inactive
01 Transmit preemption is active
10 Reserved
11 Reserved
However none of these 2 fields offer reliable reporting to software.
When connecting ENETC to a link partner which is not capable of Frame
Preemption, the expectation is that ENETC's verification should fail
(VSTS=4) and its MM TX direction should be inactive (LPA=0, TXSTS=00)
even though the MM TX is enabled (ME=1). But surprise, the LPA bit of
MMCSR stays set even if VSTS=4 and ME=1.
OTOH, the TXSTS field has the opposite problem. I cannot get its value
to change from 0, even when connecting to a link partner capable of
frame preemption, which does respond to its verification frames (ME=1
and VSTS=3, "SUCCEEDED").
The only option with such buggy hardware seems to be to reimplement the
formula for calculating tx-active in software, which is for tx-enabled
to be true, and for the verify-status to be either SUCCEEDED, or
DISABLED.
Without reliable tx-active reporting, we have no good indication when
to commit the preemptible traffic classes to hardware, which makes it
possible (but not desirable) to send preemptible traffic to a link
partner incapable of receiving it. However, currently we do not have the
logic to wait for TX to be active yet, so the impact is limited.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18 14:14:52 +03:00
|
|
|
state->tx_active = state->tx_enabled &&
|
|
|
|
(state->verify_status == ETHTOOL_MM_VERIFY_STATUS_SUCCEEDED ||
|
|
|
|
state->verify_status == ETHTOOL_MM_VERIFY_STATUS_DISABLED);
|
2023-02-06 11:45:30 +02:00
|
|
|
state->verify_enabled = !(val & ENETC_MMCSR_VDIS);
|
|
|
|
state->verify_time = ENETC_MMCSR_GET_VT(val);
|
|
|
|
/* A verifyTime of 128 ms would exceed the 7 bit width
|
|
|
|
* of the ENETC_MMCSR_VT field
|
|
|
|
*/
|
|
|
|
state->max_verify_time = 127;
|
|
|
|
|
|
|
|
mutex_unlock(&priv->mm_lock);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}

static int enetc_mm_wait_tx_active(struct enetc_hw *hw, int verify_time)
{
	int timeout = verify_time * USEC_PER_MSEC * ENETC_MM_VERIFY_RETRIES;
	u32 val;

	/* This will time out after the standard value of 3 verification
	 * attempts. To not sleep forever, it relies on a non-zero verify_time,
	 * a guarantee provided by the ethtool netlink attribute policy.
	 */
	return read_poll_timeout(enetc_port_rd, val,
				 ENETC_MMCSR_GET_VSTS(val) == 3,
				 ENETC_MM_VERIFY_SLEEP_US, timeout,
				 true, hw, ENETC_MMCSR);
}

static void enetc_set_ptcfpr(struct enetc_hw *hw, u8 preemptible_tcs)
{
	u32 val;
	int tc;

	for (tc = 0; tc < 8; tc++) {
		val = enetc_port_rd(hw, ENETC_PTCFPR(tc));

		if (preemptible_tcs & BIT(tc))
			val |= ENETC_PTCFPR_FPE;
		else
			val &= ~ENETC_PTCFPR_FPE;

		enetc_port_wr(hw, ENETC_PTCFPR(tc), val);
	}
}

/* ENETC does not have an IRQ to notify changes to the MAC Merge TX status
 * (active/inactive), but the preemptible traffic classes should only be
 * committed to hardware once TX is active. Resort to polling.
 */
void enetc_mm_commit_preemptible_tcs(struct enetc_ndev_priv *priv)
{
	struct enetc_hw *hw = &priv->si->hw;
	u8 preemptible_tcs = 0;
	u32 val;
	int err;

	val = enetc_port_rd(hw, ENETC_MMCSR);
	if (!(val & ENETC_MMCSR_ME))
		goto out;

	if (!(val & ENETC_MMCSR_VDIS)) {
		err = enetc_mm_wait_tx_active(hw, ENETC_MMCSR_GET_VT(val));
		if (err)
			goto out;
	}

	preemptible_tcs = priv->preemptible_tcs;

out:
	enetc_set_ptcfpr(hw, preemptible_tcs);
}

/* FIXME: Workaround for the link partner's verification failing if ENETC
 * priorly received too much express traffic. The documentation doesn't
 * suggest this is needed. Toggling PM0_COMMAND_CONFIG bit RX_EN (also done
 * by enetc_mac_enable() on link down/up, which is how the workaround was
 * found) clears the condition.
 */
static void enetc_restart_emac_rx(struct enetc_si *si)
{
	u32 val = enetc_port_rd(&si->hw, ENETC_PM0_CMD_CFG);

	enetc_port_wr(&si->hw, ENETC_PM0_CMD_CFG, val & ~ENETC_PM0_RX_EN);

	if (val & ENETC_PM0_RX_EN)
		enetc_port_wr(&si->hw, ENETC_PM0_CMD_CFG, val);
}

static int enetc_set_mm(struct net_device *ndev, struct ethtool_mm_cfg *cfg,
			struct netlink_ext_ack *extack)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);
	struct enetc_hw *hw = &priv->si->hw;
	struct enetc_si *si = priv->si;
	u32 val, add_frag_size;
	int err;

	if (!(si->hw_features & ENETC_SI_F_QBU))
		return -EOPNOTSUPP;

	err = ethtool_mm_frag_size_min_to_add(cfg->tx_min_frag_size,
					      &add_frag_size, extack);
	if (err)
		return err;

	mutex_lock(&priv->mm_lock);

	val = enetc_port_rd(hw, ENETC_PFPMR);
	if (cfg->pmac_enabled)
		val |= ENETC_PFPMR_PMACE;
	else
		val &= ~ENETC_PFPMR_PMACE;
	enetc_port_wr(hw, ENETC_PFPMR, val);

	val = enetc_port_rd(hw, ENETC_MMCSR);

	if (cfg->verify_enabled)
		val &= ~ENETC_MMCSR_VDIS;
	else
		val |= ENETC_MMCSR_VDIS;

	if (cfg->tx_enabled)
		priv->active_offloads |= ENETC_F_QBU;
	else
		priv->active_offloads &= ~ENETC_F_QBU;

	/* If link is up, enable/disable MAC Merge right away */
	if (!(val & ENETC_MMCSR_LINK_FAIL)) {
		if (!!(priv->active_offloads & ENETC_F_QBU))
			val |= ENETC_MMCSR_ME;
		else
			val &= ~ENETC_MMCSR_ME;
	}

	val &= ~ENETC_MMCSR_VT_MASK;
	val |= ENETC_MMCSR_VT(cfg->verify_time);

	val &= ~ENETC_MMCSR_RAFS_MASK;
	val |= ENETC_MMCSR_RAFS(add_frag_size);

	enetc_port_wr(hw, ENETC_MMCSR, val);

	enetc_restart_emac_rx(priv->si);

	enetc_mm_commit_preemptible_tcs(priv);

	mutex_unlock(&priv->mm_lock);

	return 0;
}

/* When the link is lost, the verification state machine goes to the FAILED
 * state and doesn't restart on its own after a new link up event.
 * According to 802.3 Figure 99-8 - Verify state diagram, the LINK_FAIL bit
 * should have been sufficient to re-trigger verification, but for ENETC it
 * doesn't. As a workaround, we need to toggle the Merge Enable bit to
 * re-trigger verification when link comes up.
 */
void enetc_mm_link_state_update(struct enetc_ndev_priv *priv, bool link)
{
	struct enetc_hw *hw = &priv->si->hw;
	u32 val;

	mutex_lock(&priv->mm_lock);

	val = enetc_port_rd(hw, ENETC_MMCSR);

	if (link) {
		val &= ~ENETC_MMCSR_LINK_FAIL;
		if (priv->active_offloads & ENETC_F_QBU)
			val |= ENETC_MMCSR_ME;
	} else {
		val |= ENETC_MMCSR_LINK_FAIL;
		if (priv->active_offloads & ENETC_F_QBU)
			val &= ~ENETC_MMCSR_ME;
	}

	enetc_port_wr(hw, ENETC_MMCSR, val);

	enetc_mm_commit_preemptible_tcs(priv);

	mutex_unlock(&priv->mm_lock);
}
EXPORT_SYMBOL_GPL(enetc_mm_link_state_update);

const struct ethtool_ops enetc_pf_ethtool_ops = {
	.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
				     ETHTOOL_COALESCE_MAX_FRAMES |
				     ETHTOOL_COALESCE_USE_ADAPTIVE_RX,
	.get_regs_len = enetc_get_reglen,
	.get_regs = enetc_get_regs,
	.get_sset_count = enetc_get_sset_count,
	.get_strings = enetc_get_strings,
	.get_ethtool_stats = enetc_get_ethtool_stats,
	.get_pause_stats = enetc_get_pause_stats,
	.get_rmon_stats = enetc_get_rmon_stats,
	.get_eth_ctrl_stats = enetc_get_eth_ctrl_stats,
	.get_eth_mac_stats = enetc_get_eth_mac_stats,
	.get_rxnfc = enetc_get_rxnfc,
	.set_rxnfc = enetc_set_rxnfc,
	.get_rxfh_key_size = enetc_get_rxfh_key_size,
	.get_rxfh_indir_size = enetc_get_rxfh_indir_size,
	.get_rxfh = enetc_get_rxfh,
	.set_rxfh = enetc_set_rxfh,
	.get_rxfh_fields = enetc_get_rxfh_fields,
	.get_ringparam = enetc_get_ringparam,
	.get_coalesce = enetc_get_coalesce,
	.set_coalesce = enetc_set_coalesce,
	.get_link_ksettings = enetc_get_link_ksettings,
	.set_link_ksettings = enetc_set_link_ksettings,
	.get_link = ethtool_op_get_link,
	.get_ts_info = enetc_get_ts_info,
	.get_wol = enetc_get_wol,
	.set_wol = enetc_set_wol,
	.get_pauseparam = enetc_get_pauseparam,
	.set_pauseparam = enetc_set_pauseparam,
	.get_mm = enetc_get_mm,
	.set_mm = enetc_set_mm,
	.get_mm_stats = enetc_get_mm_stats,
};

const struct ethtool_ops enetc_vf_ethtool_ops = {
	.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
				     ETHTOOL_COALESCE_MAX_FRAMES |
				     ETHTOOL_COALESCE_USE_ADAPTIVE_RX,
	.get_regs_len = enetc_get_reglen,
	.get_regs = enetc_get_regs,
	.get_sset_count = enetc_get_sset_count,
	.get_strings = enetc_get_strings,
	.get_ethtool_stats = enetc_get_ethtool_stats,
	.get_rxnfc = enetc_get_rxnfc,
	.set_rxnfc = enetc_set_rxnfc,
	.get_rxfh_indir_size = enetc_get_rxfh_indir_size,
	.get_rxfh = enetc_get_rxfh,
	.set_rxfh = enetc_set_rxfh,
	.get_rxfh_fields = enetc_get_rxfh_fields,
	.get_ringparam = enetc_get_ringparam,
	.get_coalesce = enetc_get_coalesce,
	.set_coalesce = enetc_set_coalesce,
	.get_link = ethtool_op_get_link,
	.get_ts_info = enetc_get_ts_info,
};

const struct ethtool_ops enetc4_pf_ethtool_ops = {
	.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
				     ETHTOOL_COALESCE_MAX_FRAMES |
				     ETHTOOL_COALESCE_USE_ADAPTIVE_RX,
	.get_ringparam = enetc_get_ringparam,
	.get_coalesce = enetc_get_coalesce,
	.set_coalesce = enetc_set_coalesce,
	.get_link_ksettings = enetc_get_link_ksettings,
	.set_link_ksettings = enetc_set_link_ksettings,
	.get_link = ethtool_op_get_link,
	.get_wol = enetc_get_wol,
	.set_wol = enetc_set_wol,
	.get_pauseparam = enetc_get_pauseparam,
	.set_pauseparam = enetc_set_pauseparam,
	.get_rxnfc = enetc4_get_rxnfc,
	.get_rxfh_key_size = enetc_get_rxfh_key_size,
	.get_rxfh_indir_size = enetc_get_rxfh_indir_size,
	.get_rxfh = enetc_get_rxfh,
	.set_rxfh = enetc_set_rxfh,
	.get_rxfh_fields = enetc_get_rxfh_fields,
};

void enetc_set_ethtool_ops(struct net_device *ndev)
{
	struct enetc_ndev_priv *priv = netdev_priv(ndev);

	ndev->ethtool_ops = priv->si->drvdata->eth_ops;
}
EXPORT_SYMBOL_GPL(enetc_set_ethtool_ops);