linux/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c

// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
/* Copyright 2014-2016 Freescale Semiconductor Inc.
* Copyright 2016-2022 NXP
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/etherdevice.h>
#include <linux/of_net.h>
#include <linux/interrupt.h>
#include <linux/kthread.h>
#include <linux/iommu.h>
#include <linux/fsl/mc.h>
#include <linux/bpf.h>
#include <linux/bpf_trace.h>
#include <linux/fsl/ptp_qoriq.h>
#include <linux/ptp_classify.h>
#include <net/pkt_cls.h>
#include <net/sock.h>
#include <net/tso.h>
#include <net/xdp_sock_drv.h>
#include "dpaa2-eth.h"
/* CREATE_TRACE_POINTS only needs to be defined once. Other dpa files
* using trace events only need to #include <trace/events/sched.h>
*/
#define CREATE_TRACE_POINTS
#include "dpaa2-eth-trace.h"
MODULE_LICENSE("Dual BSD/GPL");
MODULE_AUTHOR("Freescale Semiconductor, Inc");
MODULE_DESCRIPTION("Freescale DPAA2 Ethernet Driver");
struct ptp_qoriq *dpaa2_ptp;
EXPORT_SYMBOL(dpaa2_ptp);
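/* Detect optional firmware features. For now, only check whether the DPNI
 * version is recent enough to allow direct access to the one-step PTP
 * timestamping register.
 */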
static void dpaa2_eth_detect_features(struct dpaa2_eth_priv *priv)
{
priv->features = 0;
if (dpaa2_eth_cmp_dpni_ver(priv, DPNI_PTP_ONESTEP_VER_MAJOR,
DPNI_PTP_ONESTEP_VER_MINOR) >= 0)
priv->features |= DPAA2_ETH_FEATURE_ONESTEP_CFG_DIRECT;
}
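/* Update the one-step timestamping parameters through the
 * dpni_set_single_step_cfg() MC command (indirect, slower path)
 */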
static void dpaa2_update_ptp_onestep_indirect(struct dpaa2_eth_priv *priv,
u32 offset, u8 udp)
{
struct dpni_single_step_cfg cfg;
cfg.en = 1;
cfg.ch_update = udp;
cfg.offset = offset;
cfg.peer_delay = 0;
if (dpni_set_single_step_cfg(priv->mc_io, 0, priv->mc_token, &cfg))
WARN_ONCE(1, "Failed to set single step register");
}
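/* Update the one-step timestamping parameters by writing the ioremapped
 * SINGLE_STEP register directly, without issuing an MC command
 */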
static void dpaa2_update_ptp_onestep_direct(struct dpaa2_eth_priv *priv,
u32 offset, u8 udp)
{
u32 val = 0;
val = DPAA2_PTP_SINGLE_STEP_ENABLE |
DPAA2_PTP_SINGLE_CORRECTION_OFF(offset);
if (udp)
val |= DPAA2_PTP_SINGLE_STEP_CH;
if (priv->onestep_reg_base)
writel(val, priv->onestep_reg_base);
}
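/* Select how the one-step timestamping parameters are updated: default to
 * the indirect MC command and switch to direct register writes only if the
 * firmware exposes the register base address and it can be mapped
 */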
static void dpaa2_ptp_onestep_reg_update_method(struct dpaa2_eth_priv *priv)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpni_single_step_cfg ptp_cfg;
priv->dpaa2_set_onestep_params_cb = dpaa2_update_ptp_onestep_indirect;
if (!(priv->features & DPAA2_ETH_FEATURE_ONESTEP_CFG_DIRECT))
return;
if (dpni_get_single_step_cfg(priv->mc_io, 0,
priv->mc_token, &ptp_cfg)) {
dev_err(dev, "dpni_get_single_step_cfg cannot retrieve onestep reg, falling back to indirect update\n");
return;
}
if (!ptp_cfg.ptp_onestep_reg_base) {
dev_err(dev, "1588 onestep reg not available, falling back to indirect update\n");
return;
}
priv->onestep_reg_base = ioremap(ptp_cfg.ptp_onestep_reg_base,
sizeof(u32));
if (!priv->onestep_reg_base) {
dev_err(dev, "1588 onestep reg cannot be mapped, falling back to indirect update\n");
return;
}
priv->dpaa2_set_onestep_params_cb = dpaa2_update_ptp_onestep_direct;
}
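/* Translate a DMA/IOVA address back to a kernel virtual address, going
 * through the IOMMU domain if one is in use
 */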
void *dpaa2_iova_to_virt(struct iommu_domain *domain,
dma_addr_t iova_addr)
{
phys_addr_t phys_addr;
phys_addr = domain ? iommu_iova_to_phys(domain, iova_addr) : iova_addr;
return phys_to_virt(phys_addr);
}
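/* Propagate the hardware Rx checksum validation result to the stack */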
static void dpaa2_eth_validate_rx_csum(struct dpaa2_eth_priv *priv,
u32 fd_status,
struct sk_buff *skb)
{
skb_checksum_none_assert(skb);
/* HW checksum validation is disabled, nothing to do here */
if (!(priv->net_dev->features & NETIF_F_RXCSUM))
return;
/* Read checksum validation bits */
if (!((fd_status & DPAA2_FAS_L3CV) &&
(fd_status & DPAA2_FAS_L4CV)))
return;
/* Inform the stack there's no need to compute L3/L4 csum anymore */
skb->ip_summed = CHECKSUM_UNNECESSARY;
}
/* Free a received FD.
* Not to be used for Tx conf FDs or on any other paths.
*/
static void dpaa2_eth_free_rx_fd(struct dpaa2_eth_priv *priv,
const struct dpaa2_fd *fd,
void *vaddr)
{
struct device *dev = priv->net_dev->dev.parent;
dma_addr_t addr = dpaa2_fd_get_addr(fd);
u8 fd_format = dpaa2_fd_get_format(fd);
struct dpaa2_sg_entry *sgt;
void *sg_vaddr;
int i;
/* If single buffer frame, just free the data buffer */
if (fd_format == dpaa2_fd_single)
goto free_buf;
else if (fd_format != dpaa2_fd_sg)
/* We don't support any other format */
return;
/* For S/G frames, we first need to free all SG entries
* except the first one, which was taken care of already
*/
sgt = vaddr + dpaa2_fd_get_offset(fd);
for (i = 1; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
addr = dpaa2_sg_get_addr(&sgt[i]);
sg_vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr);
dma_unmap_page(dev, addr, priv->rx_buf_size,
DMA_BIDIRECTIONAL);
free_pages((unsigned long)sg_vaddr, 0);
if (dpaa2_sg_is_final(&sgt[i]))
break;
}
free_buf:
free_pages((unsigned long)vaddr, 0);
}
/* Build a linear skb based on a single-buffer frame descriptor */
static struct sk_buff *dpaa2_eth_build_linear_skb(struct dpaa2_eth_channel *ch,
const struct dpaa2_fd *fd,
void *fd_vaddr)
{
struct sk_buff *skb = NULL;
u16 fd_offset = dpaa2_fd_get_offset(fd);
u32 fd_length = dpaa2_fd_get_len(fd);
ch->buf_count--;
skb = build_skb(fd_vaddr, DPAA2_ETH_RX_BUF_RAW_SIZE);
if (unlikely(!skb))
return NULL;
skb_reserve(skb, fd_offset);
skb_put(skb, fd_length);
return skb;
}
/* Build a non linear (fragmented) skb based on a S/G table */
static struct sk_buff *dpaa2_eth_build_frag_skb(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch,
struct dpaa2_sg_entry *sgt)
{
struct sk_buff *skb = NULL;
struct device *dev = priv->net_dev->dev.parent;
void *sg_vaddr;
dma_addr_t sg_addr;
u16 sg_offset;
u32 sg_length;
struct page *page, *head_page;
int page_offset;
int i;
for (i = 0; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
struct dpaa2_sg_entry *sge = &sgt[i];
/* NOTE: We only support SG entries in dpaa2_sg_single format,
* but this is the only format we may receive from HW anyway
*/
/* Get the address and length from the S/G entry */
sg_addr = dpaa2_sg_get_addr(sge);
sg_vaddr = dpaa2_iova_to_virt(priv->iommu_domain, sg_addr);
dma_unmap_page(dev, sg_addr, priv->rx_buf_size,
DMA_BIDIRECTIONAL);
sg_length = dpaa2_sg_get_len(sge);
if (i == 0) {
/* We build the skb around the first data buffer */
skb = build_skb(sg_vaddr, DPAA2_ETH_RX_BUF_RAW_SIZE);
if (unlikely(!skb)) {
/* Free the first SG entry now, since we already
* unmapped it and obtained the virtual address
*/
free_pages((unsigned long)sg_vaddr, 0);
/* We still need to subtract the buffers used
* by this FD from our software counter
*/
while (!dpaa2_sg_is_final(&sgt[i]) &&
i < DPAA2_ETH_MAX_SG_ENTRIES)
i++;
break;
}
sg_offset = dpaa2_sg_get_offset(sge);
skb_reserve(skb, sg_offset);
skb_put(skb, sg_length);
} else {
/* Rest of the data buffers are stored as skb frags */
page = virt_to_page(sg_vaddr);
head_page = virt_to_head_page(sg_vaddr);
/* Offset in page (which may be compound).
* Data in subsequent SG entries is stored from the
* beginning of the buffer, so we don't need to add the
* sg_offset.
*/
page_offset = ((unsigned long)sg_vaddr &
(PAGE_SIZE - 1)) +
(page_address(page) - page_address(head_page));
skb_add_rx_frag(skb, i - 1, head_page, page_offset,
sg_length, priv->rx_buf_size);
}
if (dpaa2_sg_is_final(sge))
break;
}
WARN_ONCE(i == DPAA2_ETH_MAX_SG_ENTRIES, "Final bit not set in SGT");
/* Count all data buffers + SG table buffer */
ch->buf_count -= i + 2;
return skb;
}
/* Free buffers acquired from the buffer pool or which were meant to
* be released in the pool
*/
static void dpaa2_eth_free_bufs(struct dpaa2_eth_priv *priv, u64 *buf_array,
int count, bool xsk_zc)
{
struct device *dev = priv->net_dev->dev.parent;
net: dpaa2-eth: AF_XDP RX zero copy support This patch adds the support for receiving packets via the AF_XDP zero-copy mechanism in the dpaa2-eth driver. The support is available only on the LX2160A SoC and variants because we are relying on the HW capability to associate a buffer pool to a specific queue (QDBIN), only available on newer WRIOP versions. On the control path, the dpaa2_xsk_enable_pool() function is responsible to allocate a buffer pool (BP), setup this new BP to be used only on the requested queue and change the consume function to point to the XSK ZC one. We are forced to call dev_close() in order to change the queue to buffer pool association (dpaa2_xsk_set_bp_per_qdbin) . This also works in our favor since at dev_close() the buffer pools will be drained and at the later dev_open() call they will be again seeded, this time with buffers allocated from the XSK pool if needed. On the data path, a new software annotation type is defined to be used only for the XSK scenarios. This will enable us to pass keep necessary information about a packet buffer between the moment in which it was seeded and when it's received by the driver. In the XSK case, we are keeping the associated xdp_buff. Depending on the action returned by the BPF program, we will do the following: - XDP_PASS: copy the contents of the packet into a brand new skb, recycle the initial buffer. - XDP_TX: just enqueue the same frame descriptor back into the Tx path, the buffer will get automatically released into the initial BP. - XDP_REDIRECT: call xdp_do_redirect() and exit. Signed-off-by: Robert-Ionut Alexa <robert-ionut.alexa@nxp.com> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-18 17:18:59 +03:00
struct dpaa2_eth_swa *swa;
struct xdp_buff *xdp_buff;
void *vaddr;
int i;
for (i = 0; i < count; i++) {
vaddr = dpaa2_iova_to_virt(priv->iommu_domain, buf_array[i]);
if (!xsk_zc) {
dma_unmap_page(dev, buf_array[i], priv->rx_buf_size,
DMA_BIDIRECTIONAL);
free_pages((unsigned long)vaddr, 0);
} else {
swa = (struct dpaa2_eth_swa *)
(vaddr + DPAA2_ETH_RX_HWA_SIZE);
xdp_buff = swa->xsk.xdp_buff;
xsk_buff_free(xdp_buff);
}
}
}
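/* Queue a buffer for recycling into its buffer pool. Buffers are batched and
 * released DPAA2_ETH_BUFS_PER_CMD at a time; if the release keeps failing,
 * free the buffers instead of leaking them.
 */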
void dpaa2_eth_recycle_buf(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch,
dma_addr_t addr)
{
int retries = 0;
int err;
ch->recycled_bufs[ch->recycled_bufs_cnt++] = addr;
if (ch->recycled_bufs_cnt < DPAA2_ETH_BUFS_PER_CMD)
return;
while ((err = dpaa2_io_service_release(ch->dpio, ch->bp->bpid,
ch->recycled_bufs,
ch->recycled_bufs_cnt)) == -EBUSY) {
if (retries++ >= DPAA2_ETH_SWP_BUSY_RETRIES)
break;
cpu_relax();
}
if (err) {
dpaa2_eth_free_bufs(priv, ch->recycled_bufs,
ch->recycled_bufs_cnt, ch->xsk_zc);
ch->buf_count -= ch->recycled_bufs_cnt;
}
ch->recycled_bufs_cnt = 0;
}
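/* Enqueue an array of XDP frame descriptors, retrying on portal busy, and
 * return how many of them were actually accepted by hardware
 */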
static int dpaa2_eth_xdp_flush(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_fq *fq,
struct dpaa2_eth_xdp_fds *xdp_fds)
{
int total_enqueued = 0, retries = 0, enqueued;
struct dpaa2_eth_drv_stats *percpu_extras;
int num_fds, err, max_retries;
struct dpaa2_fd *fds;
percpu_extras = this_cpu_ptr(priv->percpu_extras);
/* try to enqueue all the FDs until the max number of retries is hit */
fds = xdp_fds->fds;
num_fds = xdp_fds->num;
max_retries = num_fds * DPAA2_ETH_ENQUEUE_RETRIES;
while (total_enqueued < num_fds && retries < max_retries) {
err = priv->enqueue(priv, fq, &fds[total_enqueued],
0, num_fds - total_enqueued, &enqueued);
if (err == -EBUSY) {
percpu_extras->tx_portal_busy += ++retries;
continue;
}
total_enqueued += enqueued;
}
xdp_fds->num = 0;
return total_enqueued;
}
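/* Flush the pending XDP_TX frames of a Tx queue: enqueue them to hardware,
 * update the counters and recycle the buffers of any frames that could not
 * be sent
 */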
static void dpaa2_eth_xdp_tx_flush(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch,
struct dpaa2_eth_fq *fq)
{
struct rtnl_link_stats64 *percpu_stats;
struct dpaa2_fd *fds;
int enqueued, i;
percpu_stats = this_cpu_ptr(priv->percpu_stats);
// enqueue the array of XDP_TX frames
enqueued = dpaa2_eth_xdp_flush(priv, fq, &fq->xdp_tx_fds);
/* update statistics */
percpu_stats->tx_packets += enqueued;
fds = fq->xdp_tx_fds.fds;
for (i = 0; i < enqueued; i++) {
percpu_stats->tx_bytes += dpaa2_fd_get_len(&fds[i]);
ch->stats.xdp_tx++;
}
for (i = enqueued; i < fq->xdp_tx_fds.num; i++) {
dpaa2_eth_recycle_buf(priv, ch, dpaa2_fd_get_addr(&fds[i]));
percpu_stats->tx_errors++;
ch->stats.xdp_tx_err++;
}
fq->xdp_tx_fds.num = 0;
}
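/* Prepare an XDP_TX frame for transmission and add it to the per-queue
 * batch; the batch is flushed to hardware once it reaches DEV_MAP_BULK_SIZE
 * frames
 */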
void dpaa2_eth_xdp_enqueue(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch,
struct dpaa2_fd *fd,
void *buf_start, u16 queue_id)
{
struct dpaa2_faead *faead;
struct dpaa2_fd *dest_fd;
struct dpaa2_eth_fq *fq;
u32 ctrl, frc;
/* Mark the egress frame hardware annotation area as valid */
frc = dpaa2_fd_get_frc(fd);
dpaa2_fd_set_frc(fd, frc | DPAA2_FD_FRC_FAEADV);
dpaa2_fd_set_ctrl(fd, DPAA2_FD_CTRL_ASAL);
/* Instruct hardware to release the FD buffer directly into
* the buffer pool once transmission is completed, instead of
* sending a Tx confirmation frame to us
*/
ctrl = DPAA2_FAEAD_A4V | DPAA2_FAEAD_A2V | DPAA2_FAEAD_EBDDV;
faead = dpaa2_get_faead(buf_start, false);
faead->ctrl = cpu_to_le32(ctrl);
faead->conf_fqid = 0;
fq = &priv->fq[queue_id];
dest_fd = &fq->xdp_tx_fds.fds[fq->xdp_tx_fds.num++];
memcpy(dest_fd, fd, sizeof(*dest_fd));
if (fq->xdp_tx_fds.num < DEV_MAP_BULK_SIZE)
return;
dpaa2_eth_xdp_tx_flush(priv, ch, fq);
}
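/* Run the attached XDP program on an Rx frame and act on the resulting
 * verdict (PASS, TX, REDIRECT, DROP/ABORTED)
 */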
static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch,
struct dpaa2_eth_fq *rx_fq,
struct dpaa2_fd *fd, void *vaddr)
{
dma_addr_t addr = dpaa2_fd_get_addr(fd);
struct bpf_prog *xdp_prog;
struct xdp_buff xdp;
u32 xdp_act = XDP_PASS;
int err, offset;
xdp_prog = READ_ONCE(ch->xdp.prog);
if (!xdp_prog)
goto out;
offset = dpaa2_fd_get_offset(fd) - XDP_PACKET_HEADROOM;
xdp_init_buff(&xdp, DPAA2_ETH_RX_BUF_RAW_SIZE - offset, &ch->xdp_rxq);
xdp_prepare_buff(&xdp, vaddr + offset, XDP_PACKET_HEADROOM,
dpaa2_fd_get_len(fd), false);
xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
/* xdp.data pointer may have changed */
dpaa2_fd_set_offset(fd, xdp.data - vaddr);
dpaa2_fd_set_len(fd, xdp.data_end - xdp.data);
switch (xdp_act) {
case XDP_PASS:
break;
case XDP_TX:
dpaa2_eth_xdp_enqueue(priv, ch, fd, vaddr, rx_fq->flowid);
break;
default:
bpf_warn_invalid_xdp_action(priv->net_dev, xdp_prog, xdp_act);
fallthrough;
case XDP_ABORTED:
trace_xdp_exception(priv->net_dev, xdp_prog, xdp_act);
fallthrough;
case XDP_DROP:
dpaa2_eth_recycle_buf(priv, ch, addr);
ch->stats.xdp_drop++;
break;
case XDP_REDIRECT:
dma_unmap_page(priv->net_dev->dev.parent, addr,
priv->rx_buf_size, DMA_BIDIRECTIONAL);
ch->buf_count--;
/* Allow redirect use of full headroom */
xdp.data_hard_start = vaddr;
xdp.frame_sz = DPAA2_ETH_RX_BUF_RAW_SIZE;
err = xdp_do_redirect(priv->net_dev, &xdp, xdp_prog);
if (unlikely(err)) {
addr = dma_map_page(priv->net_dev->dev.parent,
virt_to_page(vaddr), 0,
priv->rx_buf_size, DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(priv->net_dev->dev.parent, addr))) {
free_pages((unsigned long)vaddr, 0);
} else {
ch->buf_count++;
dpaa2_eth_recycle_buf(priv, ch, addr);
}
ch->stats.xdp_drop++;
} else {
ch->stats.xdp_redirect++;
}
break;
}
ch->xdp.res |= xdp_act;
out:
return xdp_act;
}
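/* Build a linear skb by copying the frame contents out of the Rx buffer,
 * leaving the original hardware buffer untouched
 */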
struct sk_buff *dpaa2_eth_alloc_skb(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch,
const struct dpaa2_fd *fd, u32 fd_length,
void *fd_vaddr)
{
u16 fd_offset = dpaa2_fd_get_offset(fd);
struct sk_buff *skb = NULL;
unsigned int skb_len;
skb_len = fd_length + dpaa2_eth_needed_headroom(NULL);
skb = napi_alloc_skb(&ch->napi, skb_len);
if (!skb)
return NULL;
skb_reserve(skb, dpaa2_eth_needed_headroom(NULL));
skb_put(skb, fd_length);
memcpy(skb->data, fd_vaddr + fd_offset, fd_length);
return skb;
}
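/* For frames shorter than the rx_copybreak threshold, copy the data into a
 * freshly allocated skb so the original buffer can be recycled right away
 */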
static struct sk_buff *dpaa2_eth_copybreak(struct dpaa2_eth_channel *ch,
const struct dpaa2_fd *fd,
void *fd_vaddr)
{
struct dpaa2_eth_priv *priv = ch->priv;
u32 fd_length = dpaa2_fd_get_len(fd);
if (fd_length > priv->rx_copybreak)
return NULL;
return dpaa2_eth_alloc_skb(priv, ch, fd, fd_length, fd_vaddr);
}
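/* Finish Rx processing of an skb: fill in the hardware timestamp and
 * checksum status, update statistics and add it to the channel Rx list
 */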
void dpaa2_eth_receive_skb(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch,
const struct dpaa2_fd *fd, void *vaddr,
struct dpaa2_eth_fq *fq,
struct rtnl_link_stats64 *percpu_stats,
struct sk_buff *skb)
{
struct dpaa2_fas *fas;
u32 status = 0;
fas = dpaa2_get_fas(vaddr, false);
prefetch(fas);
prefetch(skb->data);
/* Get the timestamp value */
if (priv->rx_tstamp) {
struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
__le64 *ts = dpaa2_get_ts(vaddr, false);
u64 ns;
memset(shhwtstamps, 0, sizeof(*shhwtstamps));
ns = DPAA2_PTP_CLK_PERIOD_NS * le64_to_cpup(ts);
shhwtstamps->hwtstamp = ns_to_ktime(ns);
}
/* Check if we need to validate the L4 csum */
if (likely(dpaa2_fd_get_frc(fd) & DPAA2_FD_FRC_FASV)) {
status = le32_to_cpu(fas->status);
dpaa2_eth_validate_rx_csum(priv, status, skb);
}
skb->protocol = eth_type_trans(skb, priv->net_dev);
skb_record_rx_queue(skb, fq->flowid);
percpu_stats->rx_packets++;
percpu_stats->rx_bytes += dpaa2_fd_get_len(fd);
ch->stats.bytes_per_cdan += dpaa2_fd_get_len(fd);
list_add_tail(&skb->list, ch->rx_list);
}
/* Main Rx frame processing routine */
void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch,
const struct dpaa2_fd *fd,
struct dpaa2_eth_fq *fq)
{
dma_addr_t addr = dpaa2_fd_get_addr(fd);
u8 fd_format = dpaa2_fd_get_format(fd);
void *vaddr;
struct sk_buff *skb;
struct rtnl_link_stats64 *percpu_stats;
struct dpaa2_eth_drv_stats *percpu_extras;
struct device *dev = priv->net_dev->dev.parent;
bool recycle_rx_buf = false;
void *buf_data;
u32 xdp_act;
/* Tracing point */
trace_dpaa2_rx_fd(priv->net_dev, fd);
vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr);
dma_sync_single_for_cpu(dev, addr, priv->rx_buf_size,
DMA_BIDIRECTIONAL);
buf_data = vaddr + dpaa2_fd_get_offset(fd);
prefetch(buf_data);
percpu_stats = this_cpu_ptr(priv->percpu_stats);
percpu_extras = this_cpu_ptr(priv->percpu_extras);
if (fd_format == dpaa2_fd_single) {
xdp_act = dpaa2_eth_run_xdp(priv, ch, fq, (struct dpaa2_fd *)fd, vaddr);
if (xdp_act != XDP_PASS) {
percpu_stats->rx_packets++;
percpu_stats->rx_bytes += dpaa2_fd_get_len(fd);
return;
}
skb = dpaa2_eth_copybreak(ch, fd, vaddr);
if (!skb) {
dma_unmap_page(dev, addr, priv->rx_buf_size,
DMA_BIDIRECTIONAL);
skb = dpaa2_eth_build_linear_skb(ch, fd, vaddr);
} else {
recycle_rx_buf = true;
}
} else if (fd_format == dpaa2_fd_sg) {
WARN_ON(priv->xdp_prog);
dma_unmap_page(dev, addr, priv->rx_buf_size,
DMA_BIDIRECTIONAL);
skb = dpaa2_eth_build_frag_skb(priv, ch, buf_data);
free_pages((unsigned long)vaddr, 0);
percpu_extras->rx_sg_frames++;
percpu_extras->rx_sg_bytes += dpaa2_fd_get_len(fd);
} else {
/* We don't support any other format */
goto err_frame_format;
}
if (unlikely(!skb))
goto err_build_skb;
dpaa2_eth_receive_skb(priv, ch, fd, vaddr, fq, percpu_stats, skb);
if (recycle_rx_buf)
dpaa2_eth_recycle_buf(priv, ch, dpaa2_fd_get_addr(fd));
return;
err_build_skb:
dpaa2_eth_free_rx_fd(priv, fd, vaddr);
err_frame_format:
percpu_stats->rx_dropped++;
}
/* Processing of Rx frames received on the error FQ
* We check and print the error bits and then free the frame
*/
static void dpaa2_eth_rx_err(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch,
const struct dpaa2_fd *fd,
struct dpaa2_eth_fq *fq __always_unused)
{
struct device *dev = priv->net_dev->dev.parent;
dma_addr_t addr = dpaa2_fd_get_addr(fd);
u8 fd_format = dpaa2_fd_get_format(fd);
struct rtnl_link_stats64 *percpu_stats;
struct dpaa2_eth_trap_item *trap_item;
struct dpaa2_fapr *fapr;
struct sk_buff *skb;
void *buf_data;
void *vaddr;
vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr);
dma_sync_single_for_cpu(dev, addr, priv->rx_buf_size,
DMA_BIDIRECTIONAL);
buf_data = vaddr + dpaa2_fd_get_offset(fd);
if (fd_format == dpaa2_fd_single) {
dma_unmap_page(dev, addr, priv->rx_buf_size,
DMA_BIDIRECTIONAL);
skb = dpaa2_eth_build_linear_skb(ch, fd, vaddr);
} else if (fd_format == dpaa2_fd_sg) {
dma_unmap_page(dev, addr, priv->rx_buf_size,
DMA_BIDIRECTIONAL);
skb = dpaa2_eth_build_frag_skb(priv, ch, buf_data);
free_pages((unsigned long)vaddr, 0);
} else {
/* We don't support any other format */
dpaa2_eth_free_rx_fd(priv, fd, vaddr);
goto err_frame_format;
}
fapr = dpaa2_get_fapr(vaddr, false);
trap_item = dpaa2_eth_dl_get_trap(priv, fapr);
if (trap_item)
devlink_trap_report(priv->devlink, skb, trap_item->trap_ctx,
&priv->devlink_port, NULL);
consume_skb(skb);
err_frame_format:
percpu_stats = this_cpu_ptr(priv->percpu_stats);
percpu_stats->rx_errors++;
ch->buf_count--;
}
/* Consume all frames pull-dequeued into the store. This is the simplest way to
* make sure we don't accidentally issue another volatile dequeue which would
* overwrite (leak) frames already in the store.
*
* Observance of NAPI budget is not our concern, leaving that to the caller.
*/
static int dpaa2_eth_consume_frames(struct dpaa2_eth_channel *ch,
struct dpaa2_eth_fq **src)
{
struct dpaa2_eth_priv *priv = ch->priv;
struct dpaa2_eth_fq *fq = NULL;
struct dpaa2_dq *dq;
const struct dpaa2_fd *fd;
int cleaned = 0, retries = 0;
int is_last;
do {
dq = dpaa2_io_store_next(ch->store, &is_last);
if (unlikely(!dq)) {
/* If we're here, we *must* have placed a
* volatile dequeue command, so keep reading through
* the store until we get some sort of valid response
* token (either a valid frame or an "empty dequeue")
*/
if (retries++ >= DPAA2_ETH_SWP_BUSY_RETRIES) {
netdev_err_once(priv->net_dev,
"Unable to read a valid dequeue response\n");
return -ETIMEDOUT;
}
continue;
}
fd = dpaa2_dq_fd(dq);
fq = (struct dpaa2_eth_fq *)(uintptr_t)dpaa2_dq_fqd_ctx(dq);
fq->consume(priv, ch, fd, fq);
cleaned++;
retries = 0;
} while (!is_last);
if (!cleaned)
return 0;
fq->stats.frames += cleaned;
ch->stats.frames += cleaned;
ch->stats.frames_per_cdan += cleaned;
/* A dequeue operation only pulls frames from a single queue
* into the store. Return the frame queue as an out param.
*/
if (src)
*src = fq;
return cleaned;
}
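/* Parse a PTP packet: return its message type, the two-step flag, whether it
 * is UDP encapsulated, and the offsets of the correctionField and
 * originTimestamp fields relative to the MAC header
 */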
static int dpaa2_eth_ptp_parse(struct sk_buff *skb,
u8 *msgtype, u8 *twostep, u8 *udp,
u16 *correction_offset,
u16 *origintimestamp_offset)
{
unsigned int ptp_class;
struct ptp_header *hdr;
unsigned int type;
u8 *base;
ptp_class = ptp_classify_raw(skb);
if (ptp_class == PTP_CLASS_NONE)
return -EINVAL;
hdr = ptp_parse_header(skb, ptp_class);
if (!hdr)
return -EINVAL;
*msgtype = ptp_get_msgtype(hdr, ptp_class);
*twostep = hdr->flag_field[0] & 0x2;
type = ptp_class & PTP_CLASS_PMASK;
if (type == PTP_CLASS_IPV4 ||
type == PTP_CLASS_IPV6)
*udp = 1;
else
*udp = 0;
base = skb_mac_header(skb);
*correction_offset = (u8 *)&hdr->correction - base;
*origintimestamp_offset = (u8 *)hdr + sizeof(struct ptp_header) - base;
return 0;
}
/* Configure the egress frame annotation for timestamp update */
static void dpaa2_eth_enable_tx_tstamp(struct dpaa2_eth_priv *priv,
struct dpaa2_fd *fd,
void *buf_start,
struct sk_buff *skb)
{
struct ptp_tstamp origin_timestamp;
u8 msgtype, twostep, udp;
struct dpaa2_faead *faead;
struct dpaa2_fas *fas;
struct timespec64 ts;
u16 offset1, offset2;
u32 ctrl, frc;
__le64 *ns;
u8 *data;
/* Mark the egress frame annotation area as valid */
frc = dpaa2_fd_get_frc(fd);
dpaa2_fd_set_frc(fd, frc | DPAA2_FD_FRC_FAEADV);
/* Set hardware annotation size */
ctrl = dpaa2_fd_get_ctrl(fd);
dpaa2_fd_set_ctrl(fd, ctrl | DPAA2_FD_CTRL_ASAL);
/* enable UPD (update prepended data) bit in FAEAD field of
* hardware frame annotation area
*/
ctrl = DPAA2_FAEAD_A2V | DPAA2_FAEAD_UPDV | DPAA2_FAEAD_UPD;
faead = dpaa2_get_faead(buf_start, true);
faead->ctrl = cpu_to_le32(ctrl);
if (skb->cb[0] == TX_TSTAMP_ONESTEP_SYNC) {
if (dpaa2_eth_ptp_parse(skb, &msgtype, &twostep, &udp,
&offset1, &offset2) ||
msgtype != PTP_MSGTYPE_SYNC || twostep) {
WARN_ONCE(1, "Bad packet for one-step timestamping\n");
return;
}
/* Mark the frame annotation status as valid */
frc = dpaa2_fd_get_frc(fd);
dpaa2_fd_set_frc(fd, frc | DPAA2_FD_FRC_FASV);
/* Mark the PTP flag for one step timestamping */
fas = dpaa2_get_fas(buf_start, true);
fas->status = cpu_to_le32(DPAA2_FAS_PTP);
dpaa2_ptp->caps.gettime64(&dpaa2_ptp->caps, &ts);
ns = dpaa2_get_ts(buf_start, true);
*ns = cpu_to_le64(timespec64_to_ns(&ts) /
DPAA2_PTP_CLK_PERIOD_NS);
/* Update current time to PTP message originTimestamp field */
ns_to_ptp_tstamp(&origin_timestamp, le64_to_cpup(ns));
data = skb_mac_header(skb);
*(__be16 *)(data + offset2) = htons(origin_timestamp.sec_msb);
*(__be32 *)(data + offset2 + 2) =
htonl(origin_timestamp.sec_lsb);
*(__be32 *)(data + offset2 + 6) = htonl(origin_timestamp.nsec);
if (priv->ptp_correction_off == offset1)
return;
priv->dpaa2_set_onestep_params_cb(priv, offset1, udp);
priv->ptp_correction_off = offset1;
}
}
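/* Get a scatter-gather table buffer, either from the per-cpu cache or by
 * allocating a new one
 */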
void *dpaa2_eth_sgt_get(struct dpaa2_eth_priv *priv)
{
struct dpaa2_eth_sgt_cache *sgt_cache;
void *sgt_buf = NULL;
int sgt_buf_size;
sgt_cache = this_cpu_ptr(priv->sgt_cache);
sgt_buf_size = priv->tx_data_offset +
DPAA2_ETH_SG_ENTRIES_MAX * sizeof(struct dpaa2_sg_entry);
if (sgt_cache->count == 0)
sgt_buf = napi_alloc_frag_align(sgt_buf_size, DPAA2_ETH_TX_BUF_ALIGN);
else
sgt_buf = sgt_cache->buf[--sgt_cache->count];
if (!sgt_buf)
return NULL;
memset(sgt_buf, 0, sgt_buf_size);
return sgt_buf;
}
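/* Return an SGT buffer to the per-cpu cache, or free it if the cache is full */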
void dpaa2_eth_sgt_recycle(struct dpaa2_eth_priv *priv, void *sgt_buf)
{
struct dpaa2_eth_sgt_cache *sgt_cache;
sgt_cache = this_cpu_ptr(priv->sgt_cache);
if (sgt_cache->count >= DPAA2_ETH_SGT_CACHE_SIZE)
skb_free_frag(sgt_buf);
else
sgt_cache->buf[sgt_cache->count++] = sgt_buf;
}
/* Create a frame descriptor based on a fragmented skb */
static int dpaa2_eth_build_sg_fd(struct dpaa2_eth_priv *priv,
struct sk_buff *skb,
struct dpaa2_fd *fd,
void **swa_addr)
{
struct device *dev = priv->net_dev->dev.parent;
void *sgt_buf = NULL;
dma_addr_t addr;
int nr_frags = skb_shinfo(skb)->nr_frags;
struct dpaa2_sg_entry *sgt;
int i, err;
int sgt_buf_size;
struct scatterlist *scl, *crt_scl;
int num_sg;
int num_dma_bufs;
struct dpaa2_eth_swa *swa;
/* Create and map scatterlist.
* We don't advertise NETIF_F_FRAGLIST, so skb_to_sgvec() will not have
* to go beyond nr_frags+1.
* Note: We don't support chained scatterlists
*/
if (unlikely(PAGE_SIZE / sizeof(struct scatterlist) < nr_frags + 1))
return -EINVAL;
scl = kmalloc_array(nr_frags + 1, sizeof(struct scatterlist), GFP_ATOMIC);
if (unlikely(!scl))
return -ENOMEM;
sg_init_table(scl, nr_frags + 1);
num_sg = skb_to_sgvec(skb, scl, 0, skb->len);
if (unlikely(num_sg < 0)) {
err = -ENOMEM;
goto dma_map_sg_failed;
}
num_dma_bufs = dma_map_sg(dev, scl, num_sg, DMA_BIDIRECTIONAL);
if (unlikely(!num_dma_bufs)) {
err = -ENOMEM;
goto dma_map_sg_failed;
}
/* Prepare the HW SGT structure */
sgt_buf_size = priv->tx_data_offset +
sizeof(struct dpaa2_sg_entry) * num_dma_bufs;
sgt_buf = dpaa2_eth_sgt_get(priv);
if (unlikely(!sgt_buf)) {
err = -ENOMEM;
goto sgt_buf_alloc_failed;
}
sgt = (struct dpaa2_sg_entry *)(sgt_buf + priv->tx_data_offset);
/* Fill in the HW SGT structure.
*
* sgt_buf is zeroed out, so the following fields are implicit
* in all sgt entries:
* - offset is 0
* - format is 'dpaa2_sg_single'
*/
for_each_sg(scl, crt_scl, num_dma_bufs, i) {
dpaa2_sg_set_addr(&sgt[i], sg_dma_address(crt_scl));
dpaa2_sg_set_len(&sgt[i], sg_dma_len(crt_scl));
}
dpaa2_sg_set_final(&sgt[i - 1], true);
/* Store the skb backpointer in the SGT buffer.
* Fit the scatterlist and the number of buffers alongside the
* skb backpointer in the software annotation area. We'll need
* all of them on Tx Conf.
*/
*swa_addr = (void *)sgt_buf;
swa = (struct dpaa2_eth_swa *)sgt_buf;
swa->type = DPAA2_ETH_SWA_SG;
swa->sg.skb = skb;
swa->sg.scl = scl;
swa->sg.num_sg = num_sg;
swa->sg.sgt_size = sgt_buf_size;
/* Separately map the SGT buffer */
addr = dma_map_single(dev, sgt_buf, sgt_buf_size, DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, addr))) {
err = -ENOMEM;
goto dma_map_single_failed;
}
memset(fd, 0, sizeof(struct dpaa2_fd));
dpaa2_fd_set_offset(fd, priv->tx_data_offset);
dpaa2_fd_set_format(fd, dpaa2_fd_sg);
dpaa2_fd_set_addr(fd, addr);
dpaa2_fd_set_len(fd, skb->len);
dpaa2_fd_set_ctrl(fd, FD_CTRL_PTA);
return 0;
dma_map_single_failed:
dpaa2_eth_sgt_recycle(priv, sgt_buf);
sgt_buf_alloc_failed:
dma_unmap_sg(dev, scl, num_sg, DMA_BIDIRECTIONAL);
dma_map_sg_failed:
kfree(scl);
return err;
}
/* Create a SG frame descriptor based on a linear skb.
*
* This function is used on the Tx path when the skb headroom is not large
* enough for the HW requirements, thus instead of realloc-ing the skb we
* create a SG frame descriptor with only one entry.
*/
static int dpaa2_eth_build_sg_fd_single_buf(struct dpaa2_eth_priv *priv,
struct sk_buff *skb,
struct dpaa2_fd *fd,
void **swa_addr)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpaa2_sg_entry *sgt;
struct dpaa2_eth_swa *swa;
dma_addr_t addr, sgt_addr;
void *sgt_buf = NULL;
int sgt_buf_size;
int err;
/* Prepare the HW SGT structure */
sgt_buf_size = priv->tx_data_offset + sizeof(struct dpaa2_sg_entry);
sgt_buf = dpaa2_eth_sgt_get(priv);
if (unlikely(!sgt_buf))
return -ENOMEM;
sgt = (struct dpaa2_sg_entry *)(sgt_buf + priv->tx_data_offset);
addr = dma_map_single(dev, skb->data, skb->len, DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, addr))) {
err = -ENOMEM;
goto data_map_failed;
}
/* Fill in the HW SGT structure */
dpaa2_sg_set_addr(sgt, addr);
dpaa2_sg_set_len(sgt, skb->len);
dpaa2_sg_set_final(sgt, true);
/* Store the skb backpointer in the SGT buffer */
*swa_addr = (void *)sgt_buf;
swa = (struct dpaa2_eth_swa *)sgt_buf;
swa->type = DPAA2_ETH_SWA_SINGLE;
swa->single.skb = skb;
swa->single.sgt_size = sgt_buf_size;
/* Separately map the SGT buffer */
sgt_addr = dma_map_single(dev, sgt_buf, sgt_buf_size, DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, sgt_addr))) {
err = -ENOMEM;
goto sgt_map_failed;
}
memset(fd, 0, sizeof(struct dpaa2_fd));
dpaa2_fd_set_offset(fd, priv->tx_data_offset);
dpaa2_fd_set_format(fd, dpaa2_fd_sg);
dpaa2_fd_set_addr(fd, sgt_addr);
dpaa2_fd_set_len(fd, skb->len);
dpaa2_fd_set_ctrl(fd, FD_CTRL_PTA);
return 0;
sgt_map_failed:
dma_unmap_single(dev, addr, skb->len, DMA_BIDIRECTIONAL);
data_map_failed:
dpaa2_eth_sgt_recycle(priv, sgt_buf);
return err;
}
/* Create a frame descriptor based on a linear skb */
static int dpaa2_eth_build_single_fd(struct dpaa2_eth_priv *priv,
struct sk_buff *skb,
struct dpaa2_fd *fd,
void **swa_addr)
{
struct device *dev = priv->net_dev->dev.parent;
u8 *buffer_start, *aligned_start;
struct dpaa2_eth_swa *swa;
dma_addr_t addr;
buffer_start = skb->data - dpaa2_eth_needed_headroom(skb);
aligned_start = PTR_ALIGN(buffer_start - DPAA2_ETH_TX_BUF_ALIGN,
DPAA2_ETH_TX_BUF_ALIGN);
if (aligned_start >= skb->head)
buffer_start = aligned_start;
else
return -ENOMEM;
/* Store a backpointer to the skb at the beginning of the buffer
* (in the private data area) such that we can release it
* on Tx confirm
*/
*swa_addr = (void *)buffer_start;
swa = (struct dpaa2_eth_swa *)buffer_start;
swa->type = DPAA2_ETH_SWA_SINGLE;
swa->single.skb = skb;
addr = dma_map_single(dev, buffer_start,
skb_tail_pointer(skb) - buffer_start,
DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, addr)))
return -ENOMEM;
memset(fd, 0, sizeof(struct dpaa2_fd));
dpaa2_fd_set_addr(fd, addr);
dpaa2_fd_set_offset(fd, (u16)(skb->data - buffer_start));
dpaa2_fd_set_len(fd, skb->len);
dpaa2_fd_set_format(fd, dpaa2_fd_single);
dpaa2_fd_set_ctrl(fd, FD_CTRL_PTA);
return 0;
}
/* FD freeing routine on the Tx path
*
* DMA-unmap and free FD and possibly SGT buffer allocated on Tx. The skb
* back-pointed to is also freed.
* This can be called either from dpaa2_eth_tx_conf() or on the error path of
* dpaa2_eth_tx().
*/
void dpaa2_eth_free_tx_fd(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch,
struct dpaa2_eth_fq *fq,
const struct dpaa2_fd *fd, bool in_napi)
{
struct device *dev = priv->net_dev->dev.parent;
dma_addr_t fd_addr, sg_addr;
struct sk_buff *skb = NULL;
unsigned char *buffer_start;
struct dpaa2_eth_swa *swa;
u8 fd_format = dpaa2_fd_get_format(fd);
u32 fd_len = dpaa2_fd_get_len(fd);
struct dpaa2_sg_entry *sgt;
int should_free_skb = 1;
void *tso_hdr;
int i;
fd_addr = dpaa2_fd_get_addr(fd);
buffer_start = dpaa2_iova_to_virt(priv->iommu_domain, fd_addr);
swa = (struct dpaa2_eth_swa *)buffer_start;
if (fd_format == dpaa2_fd_single) {
if (swa->type == DPAA2_ETH_SWA_SINGLE) {
skb = swa->single.skb;
/* Accessing the skb buffer is safe before dma unmap,
* because we didn't map the actual skb shell.
*/
dma_unmap_single(dev, fd_addr,
skb_tail_pointer(skb) - buffer_start,
DMA_BIDIRECTIONAL);
} else {
WARN_ONCE(swa->type != DPAA2_ETH_SWA_XDP, "Wrong SWA type");
dma_unmap_single(dev, fd_addr, swa->xdp.dma_size,
DMA_BIDIRECTIONAL);
}
} else if (fd_format == dpaa2_fd_sg) {
if (swa->type == DPAA2_ETH_SWA_SG) {
skb = swa->sg.skb;
/* Unmap the scatterlist */
dma_unmap_sg(dev, swa->sg.scl, swa->sg.num_sg,
DMA_BIDIRECTIONAL);
kfree(swa->sg.scl);
/* Unmap the SGT buffer */
dma_unmap_single(dev, fd_addr, swa->sg.sgt_size,
DMA_BIDIRECTIONAL);
} else if (swa->type == DPAA2_ETH_SWA_SW_TSO) {
skb = swa->tso.skb;
sgt = (struct dpaa2_sg_entry *)(buffer_start +
priv->tx_data_offset);
/* Unmap the SGT buffer */
dma_unmap_single(dev, fd_addr, swa->tso.sgt_size,
DMA_BIDIRECTIONAL);
/* Unmap and free the header */
tso_hdr = dpaa2_iova_to_virt(priv->iommu_domain, dpaa2_sg_get_addr(sgt));
dma_unmap_single(dev, dpaa2_sg_get_addr(sgt), TSO_HEADER_SIZE,
DMA_TO_DEVICE);
kfree(tso_hdr);
/* Unmap the other SG entries for the data */
for (i = 1; i < swa->tso.num_sg; i++)
dma_unmap_single(dev, dpaa2_sg_get_addr(&sgt[i]),
dpaa2_sg_get_len(&sgt[i]), DMA_TO_DEVICE);
if (!swa->tso.is_last_fd)
should_free_skb = 0;
} else if (swa->type == DPAA2_ETH_SWA_XSK) {
/* Unmap the SGT Buffer */
dma_unmap_single(dev, fd_addr, swa->xsk.sgt_size,
DMA_BIDIRECTIONAL);
} else {
skb = swa->single.skb;
/* Unmap the SGT Buffer */
dma_unmap_single(dev, fd_addr, swa->single.sgt_size,
DMA_BIDIRECTIONAL);
sgt = (struct dpaa2_sg_entry *)(buffer_start +
priv->tx_data_offset);
sg_addr = dpaa2_sg_get_addr(sgt);
dma_unmap_single(dev, sg_addr, skb->len, DMA_BIDIRECTIONAL);
}
} else {
netdev_dbg(priv->net_dev, "Invalid FD format\n");
return;
}
if (swa->type == DPAA2_ETH_SWA_XSK) {
ch->xsk_tx_pkts_sent++;
dpaa2_eth_sgt_recycle(priv, buffer_start);
return;
}
if (swa->type != DPAA2_ETH_SWA_XDP && in_napi) {
fq->dq_frames++;
fq->dq_bytes += fd_len;
}
if (swa->type == DPAA2_ETH_SWA_XDP) {
xdp_return_frame(swa->xdp.xdpf);
return;
}
/* Get the timestamp value */
if (swa->type != DPAA2_ETH_SWA_SW_TSO) {
if (skb->cb[0] == TX_TSTAMP) {
struct skb_shared_hwtstamps shhwtstamps;
__le64 *ts = dpaa2_get_ts(buffer_start, true);
u64 ns;
memset(&shhwtstamps, 0, sizeof(shhwtstamps));
ns = DPAA2_PTP_CLK_PERIOD_NS * le64_to_cpup(ts);
shhwtstamps.hwtstamp = ns_to_ktime(ns);
skb_tstamp_tx(skb, &shhwtstamps);
} else if (skb->cb[0] == TX_TSTAMP_ONESTEP_SYNC) {
mutex_unlock(&priv->onestep_tstamp_lock);
}
}
/* Free SGT buffer allocated on tx */
if (fd_format != dpaa2_fd_single)
dpaa2_eth_sgt_recycle(priv, buffer_start);
/* Move on with skb release. If we are just confirming multiple FDs
* from the same TSO skb then only the last one will need to free the
* skb.
*/
if (should_free_skb)
napi_consume_skb(skb, in_napi);
}
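/* Build one S/G frame descriptor per TSO segment of a GSO skb. Each FD
 * gets its own SGT buffer and a freshly built MAC/IP/TCP header; on
 * error, all FDs created so far are torn down.
 */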
static int dpaa2_eth_build_gso_fd(struct dpaa2_eth_priv *priv,
struct sk_buff *skb, struct dpaa2_fd *fd,
int *num_fds, u32 *total_fds_len)
{
struct device *dev = priv->net_dev->dev.parent;
int hdr_len, total_len, data_left, fd_len;
int num_sge, err, i, sgt_buf_size;
struct dpaa2_fd *fd_start = fd;
struct dpaa2_sg_entry *sgt;
struct dpaa2_eth_swa *swa;
dma_addr_t sgt_addr, addr;
dma_addr_t tso_hdr_dma;
unsigned int index = 0;
struct tso_t tso;
char *tso_hdr;
void *sgt_buf;
/* Initialize the TSO handler, and prepare the first payload */
hdr_len = tso_start(skb, &tso);
*total_fds_len = 0;
total_len = skb->len - hdr_len;
while (total_len > 0) {
/* Prepare the HW SGT structure for this frame */
sgt_buf = dpaa2_eth_sgt_get(priv);
if (unlikely(!sgt_buf)) {
netdev_err(priv->net_dev, "dpaa2_eth_sgt_get() failed\n");
err = -ENOMEM;
goto err_sgt_get;
}
sgt = (struct dpaa2_sg_entry *)(sgt_buf + priv->tx_data_offset);
/* Determine the data length of this frame */
data_left = min_t(int, skb_shinfo(skb)->gso_size, total_len);
total_len -= data_left;
fd_len = data_left + hdr_len;
/* Prepare packet headers: MAC + IP + TCP */
tso_hdr = kmalloc(TSO_HEADER_SIZE, GFP_ATOMIC);
if (!tso_hdr) {
err = -ENOMEM;
goto err_alloc_tso_hdr;
}
tso_build_hdr(skb, tso_hdr, &tso, data_left, total_len == 0);
tso_hdr_dma = dma_map_single(dev, tso_hdr, TSO_HEADER_SIZE, DMA_TO_DEVICE);
if (dma_mapping_error(dev, tso_hdr_dma)) {
netdev_err(priv->net_dev, "dma_map_single(tso_hdr) failed\n");
err = -ENOMEM;
goto err_map_tso_hdr;
}
/* Setup the SG entry for the header */
dpaa2_sg_set_addr(sgt, tso_hdr_dma);
dpaa2_sg_set_len(sgt, hdr_len);
dpaa2_sg_set_final(sgt, data_left <= 0);
/* Compose the SG entries for each fragment of data */
num_sge = 1;
while (data_left > 0) {
int size;
/* Move to the next SG entry */
sgt++;
size = min_t(int, tso.size, data_left);
addr = dma_map_single(dev, tso.data, size, DMA_TO_DEVICE);
if (dma_mapping_error(dev, addr)) {
netdev_err(priv->net_dev, "dma_map_single(tso.data) failed\n");
err = -ENOMEM;
goto err_map_data;
}
dpaa2_sg_set_addr(sgt, addr);
dpaa2_sg_set_len(sgt, size);
dpaa2_sg_set_final(sgt, size == data_left);
num_sge++;
/* Build the data for the __next__ fragment */
data_left -= size;
tso_build_data(skb, &tso, size);
}
/* Store the skb backpointer in the SGT buffer */
sgt_buf_size = priv->tx_data_offset + num_sge * sizeof(struct dpaa2_sg_entry);
swa = (struct dpaa2_eth_swa *)sgt_buf;
swa->type = DPAA2_ETH_SWA_SW_TSO;
swa->tso.skb = skb;
swa->tso.num_sg = num_sge;
swa->tso.sgt_size = sgt_buf_size;
swa->tso.is_last_fd = total_len == 0 ? 1 : 0;
/* Separately map the SGT buffer */
sgt_addr = dma_map_single(dev, sgt_buf, sgt_buf_size, DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, sgt_addr))) {
netdev_err(priv->net_dev, "dma_map_single(sgt_buf) failed\n");
err = -ENOMEM;
goto err_map_sgt;
}
/* Setup the frame descriptor */
memset(fd, 0, sizeof(struct dpaa2_fd));
dpaa2_fd_set_offset(fd, priv->tx_data_offset);
dpaa2_fd_set_format(fd, dpaa2_fd_sg);
dpaa2_fd_set_addr(fd, sgt_addr);
dpaa2_fd_set_len(fd, fd_len);
dpaa2_fd_set_ctrl(fd, FD_CTRL_PTA);
*total_fds_len += fd_len;
/* Advance to the next frame descriptor */
fd++;
index++;
}
*num_fds = index;
return 0;
err_map_sgt:
err_map_data:
/* Unmap all the data S/G entries for the current FD */
sgt = (struct dpaa2_sg_entry *)(sgt_buf + priv->tx_data_offset);
for (i = 1; i < num_sge; i++)
dma_unmap_single(dev, dpaa2_sg_get_addr(&sgt[i]),
dpaa2_sg_get_len(&sgt[i]), DMA_TO_DEVICE);
/* Unmap the header entry */
dma_unmap_single(dev, tso_hdr_dma, TSO_HEADER_SIZE, DMA_TO_DEVICE);
err_map_tso_hdr:
kfree(tso_hdr);
err_alloc_tso_hdr:
dpaa2_eth_sgt_recycle(priv, sgt_buf);
err_sgt_get:
/* Free all the other FDs that were already fully created */
for (i = 0; i < index; i++)
dpaa2_eth_free_tx_fd(priv, NULL, NULL, &fd_start[i], false);
return err;
}
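/* Main Tx routine: build one or more frame descriptors for the skb
 * (GSO, fragmented, insufficient-headroom linear or plain linear) and
 * enqueue them on the hardware queue selected by the stack.
 */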
static netdev_tx_t __dpaa2_eth_tx(struct sk_buff *skb,
struct net_device *net_dev)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
int total_enqueued = 0, retries = 0, enqueued;
struct dpaa2_eth_drv_stats *percpu_extras;
struct rtnl_link_stats64 *percpu_stats;
unsigned int needed_headroom;
int num_fds = 1, max_retries;
struct dpaa2_eth_fq *fq;
struct netdev_queue *nq;
struct dpaa2_fd *fd;
u16 queue_mapping;
void *swa = NULL;
u8 prio = 0;
int err, i;
u32 fd_len;
percpu_stats = this_cpu_ptr(priv->percpu_stats);
percpu_extras = this_cpu_ptr(priv->percpu_extras);
fd = (this_cpu_ptr(priv->fd))->array;
needed_headroom = dpaa2_eth_needed_headroom(skb);
/* We'll be holding a back-reference to the skb until Tx Confirmation;
* we don't want that overwritten by a concurrent Tx with a cloned skb.
*/
skb = skb_unshare(skb, GFP_ATOMIC);
if (unlikely(!skb)) {
/* skb_unshare() has already freed the skb */
percpu_stats->tx_dropped++;
return NETDEV_TX_OK;
}
/* Setup the FD fields */
if (skb_is_gso(skb)) {
err = dpaa2_eth_build_gso_fd(priv, skb, fd, &num_fds, &fd_len);
percpu_extras->tx_sg_frames += num_fds;
percpu_extras->tx_sg_bytes += fd_len;
percpu_extras->tx_tso_frames += num_fds;
percpu_extras->tx_tso_bytes += fd_len;
} else if (skb_is_nonlinear(skb)) {
err = dpaa2_eth_build_sg_fd(priv, skb, fd, &swa);
percpu_extras->tx_sg_frames++;
percpu_extras->tx_sg_bytes += skb->len;
fd_len = dpaa2_fd_get_len(fd);
} else if (skb_headroom(skb) < needed_headroom) {
err = dpaa2_eth_build_sg_fd_single_buf(priv, skb, fd, &swa);
percpu_extras->tx_sg_frames++;
percpu_extras->tx_sg_bytes += skb->len;
percpu_extras->tx_converted_sg_frames++;
percpu_extras->tx_converted_sg_bytes += skb->len;
fd_len = dpaa2_fd_get_len(fd);
} else {
err = dpaa2_eth_build_single_fd(priv, skb, fd, &swa);
fd_len = dpaa2_fd_get_len(fd);
}
if (unlikely(err)) {
percpu_stats->tx_dropped++;
goto err_build_fd;
}
if (swa && skb->cb[0])
dpaa2_eth_enable_tx_tstamp(priv, fd, swa, skb);
/* Tracing point */
for (i = 0; i < num_fds; i++)
trace_dpaa2_tx_fd(net_dev, &fd[i]);
/* TxConf FQ selection relies on queue id from the stack.
* In case of a forwarded frame from another DPNI interface, we choose
* a queue affined to the same core that processed the Rx frame
*/
queue_mapping = skb_get_queue_mapping(skb);
if (net_dev->num_tc) {
prio = netdev_txq_to_tc(net_dev, queue_mapping);
/* Hardware interprets priority level 0 as being the highest,
* so we need to do a reverse mapping to the netdev tc index
*/
prio = net_dev->num_tc - prio - 1;
/* We have only one FQ array entry for all Tx hardware queues
* with the same flow id (but different priority levels)
*/
queue_mapping %= dpaa2_eth_queue_count(priv);
}
fq = &priv->fq[queue_mapping];
nq = netdev_get_tx_queue(net_dev, queue_mapping);
netdev_tx_sent_queue(nq, fd_len);
/* Everything that happens after this enqueue might race with
 * the Tx confirmation callback for this frame
 */
max_retries = num_fds * DPAA2_ETH_ENQUEUE_RETRIES;
while (total_enqueued < num_fds && retries < max_retries) {
err = priv->enqueue(priv, fq, &fd[total_enqueued],
prio, num_fds - total_enqueued, &enqueued);
if (err == -EBUSY) {
retries++;
continue;
}
total_enqueued += enqueued;
}
percpu_extras->tx_portal_busy += retries;
if (unlikely(err < 0)) {
percpu_stats->tx_errors++;
/* Clean up everything, including freeing the skb */
dpaa2_eth_free_tx_fd(priv, NULL, fq, fd, false);
netdev_tx_completed_queue(nq, 1, fd_len);
} else {
percpu_stats->tx_packets += total_enqueued;
percpu_stats->tx_bytes += fd_len;
}
return NETDEV_TX_OK;
err_build_fd:
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
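/* Worker for deferred Tx of one-step timestamping PTP Sync packets.
 * Packets are drained from the tx_skbs queue and serialized through
 * onestep_tstamp_lock so that only one such packet is in flight at a
 * time.
 */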
static void dpaa2_eth_tx_onestep_tstamp(struct work_struct *work)
{
struct dpaa2_eth_priv *priv = container_of(work, struct dpaa2_eth_priv,
tx_onestep_tstamp);
struct sk_buff *skb;
while (true) {
skb = skb_dequeue(&priv->tx_skbs);
if (!skb)
return;
/* Lock just before transmitting a one-step timestamping packet,
 * and release the lock in dpaa2_eth_free_tx_fd once the packet
 * is confirmed as sent on hardware, or during cleanup on a
 * transmit failure.
 */
mutex_lock(&priv->onestep_tstamp_lock);
__dpaa2_eth_tx(skb, priv->net_dev);
}
}
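/* ndo_start_xmit callback: record the timestamping request in
 * skb->cb[0] and divert one-step timestamping PTP Sync packets to the
 * deferred transmit worker; all other packets are sent right away.
 */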
static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
u8 msgtype, twostep, udp;
u16 offset1, offset2;
/* Utilize skb->cb[0] for timestamping request per skb */
skb->cb[0] = 0;
if ((skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) && dpaa2_ptp) {
if (priv->tx_tstamp_type == HWTSTAMP_TX_ON)
skb->cb[0] = TX_TSTAMP;
else if (priv->tx_tstamp_type == HWTSTAMP_TX_ONESTEP_SYNC)
skb->cb[0] = TX_TSTAMP_ONESTEP_SYNC;
}
/* TX for one-step timestamping PTP Sync packet */
if (skb->cb[0] == TX_TSTAMP_ONESTEP_SYNC) {
if (!dpaa2_eth_ptp_parse(skb, &msgtype, &twostep, &udp,
&offset1, &offset2))
if (msgtype == PTP_MSGTYPE_SYNC && twostep == 0) {
skb_queue_tail(&priv->tx_skbs, skb);
queue_work(priv->dpaa2_ptp_wq,
&priv->tx_onestep_tstamp);
return NETDEV_TX_OK;
}
/* Fall back to two-step timestamping if this is not a
 * one-step timestamping PTP Sync packet
 */
skb->cb[0] = TX_TSTAMP;
}
/* TX for other packets */
return __dpaa2_eth_tx(skb, net_dev);
}
/* Tx confirmation frame processing routine */
static void dpaa2_eth_tx_conf(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch,
const struct dpaa2_fd *fd,
struct dpaa2_eth_fq *fq)
{
struct rtnl_link_stats64 *percpu_stats;
struct dpaa2_eth_drv_stats *percpu_extras;
u32 fd_len = dpaa2_fd_get_len(fd);
u32 fd_errors;
/* Tracing point */
trace_dpaa2_tx_conf_fd(priv->net_dev, fd);
percpu_extras = this_cpu_ptr(priv->percpu_extras);
percpu_extras->tx_conf_frames++;
percpu_extras->tx_conf_bytes += fd_len;
ch->stats.bytes_per_cdan += fd_len;
/* Check frame errors in the FD field */
fd_errors = dpaa2_fd_get_ctrl(fd) & DPAA2_FD_TX_ERR_MASK;
dpaa2_eth_free_tx_fd(priv, ch, fq, fd, true);
if (likely(!fd_errors))
return;
if (net_ratelimit())
netdev_dbg(priv->net_dev, "TX frame FD error: 0x%08x\n",
fd_errors);
percpu_stats = this_cpu_ptr(priv->percpu_stats);
/* Tx-conf logically pertains to the egress path. */
percpu_stats->tx_errors++;
}
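/* Enable or disable Rx VLAN filtering in the DPNI */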
static int dpaa2_eth_set_rx_vlan_filtering(struct dpaa2_eth_priv *priv,
bool enable)
{
int err;
err = dpni_enable_vlan_filter(priv->mc_io, 0, priv->mc_token, enable);
if (err) {
netdev_err(priv->net_dev,
"dpni_enable_vlan_filter failed\n");
return err;
}
return 0;
}
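/* Enable or disable Rx L3/L4 checksum validation offload in the DPNI */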
static int dpaa2_eth_set_rx_csum(struct dpaa2_eth_priv *priv, bool enable)
{
int err;
err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
DPNI_OFF_RX_L3_CSUM, enable);
if (err) {
netdev_err(priv->net_dev,
"dpni_set_offload(RX_L3_CSUM) failed\n");
return err;
}
err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
DPNI_OFF_RX_L4_CSUM, enable);
if (err) {
netdev_err(priv->net_dev,
"dpni_set_offload(RX_L4_CSUM) failed\n");
return err;
}
return 0;
}
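/* Enable or disable Tx L3/L4 checksum generation offload in the DPNI */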
static int dpaa2_eth_set_tx_csum(struct dpaa2_eth_priv *priv, bool enable)
{
int err;
err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
DPNI_OFF_TX_L3_CSUM, enable);
if (err) {
netdev_err(priv->net_dev, "dpni_set_offload(TX_L3_CSUM) failed\n");
return err;
}
err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
DPNI_OFF_TX_L4_CSUM, enable);
if (err) {
netdev_err(priv->net_dev, "dpni_set_offload(TX_L4_CSUM) failed\n");
return err;
}
return 0;
}
/* Perform a single release command to add buffers
* to the specified buffer pool
*/
static int dpaa2_eth_add_bufs(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch)
{
struct xdp_buff *xdp_buffs[DPAA2_ETH_BUFS_PER_CMD];
struct device *dev = priv->net_dev->dev.parent;
u64 buf_array[DPAA2_ETH_BUFS_PER_CMD];
struct dpaa2_eth_swa *swa;
struct page *page;
dma_addr_t addr;
int retries = 0;
int i = 0, err;
u32 batch;
/* Allocate buffers visible to WRIOP */
if (!ch->xsk_zc) {
for (i = 0; i < DPAA2_ETH_BUFS_PER_CMD; i++) {
/* Also allocate skb shared info and alignment padding.
* There is one page for each Rx buffer. WRIOP sees
* the entire page except for a tailroom reserved for
* skb shared info
*/
page = dev_alloc_pages(0);
if (!page)
goto err_alloc;
addr = dma_map_page(dev, page, 0, priv->rx_buf_size,
DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, addr)))
goto err_map;
buf_array[i] = addr;
/* tracing point */
trace_dpaa2_eth_buf_seed(priv->net_dev,
page_address(page),
DPAA2_ETH_RX_BUF_RAW_SIZE,
addr, priv->rx_buf_size,
ch->bp->bpid);
}
} else if (xsk_buff_can_alloc(ch->xsk_pool, DPAA2_ETH_BUFS_PER_CMD)) {
/* Allocate XSK buffers for AF_XDP fast path in batches
* of DPAA2_ETH_BUFS_PER_CMD. Bail out if the UMEM cannot
* provide enough buffers at the moment
*/
batch = xsk_buff_alloc_batch(ch->xsk_pool, xdp_buffs,
DPAA2_ETH_BUFS_PER_CMD);
if (!batch)
goto err_alloc;
for (i = 0; i < batch; i++) {
swa = (struct dpaa2_eth_swa *)(xdp_buffs[i]->data_hard_start +
DPAA2_ETH_RX_HWA_SIZE);
swa->xsk.xdp_buff = xdp_buffs[i];
addr = xsk_buff_xdp_get_frame_dma(xdp_buffs[i]);
if (unlikely(dma_mapping_error(dev, addr)))
goto err_map;
buf_array[i] = addr;
trace_dpaa2_xsk_buf_seed(priv->net_dev,
xdp_buffs[i]->data_hard_start,
DPAA2_ETH_RX_BUF_RAW_SIZE,
addr, priv->rx_buf_size,
ch->bp->bpid);
}
}
release_bufs:
/* In case the portal is busy, retry until successful */
while ((err = dpaa2_io_service_release(ch->dpio, ch->bp->bpid,
buf_array, i)) == -EBUSY) {
if (retries++ >= DPAA2_ETH_SWP_BUSY_RETRIES)
break;
cpu_relax();
}
/* If release command failed, clean up and bail out;
* not much else we can do about it
*/
if (err) {
dpaa2_eth_free_bufs(priv, buf_array, i, ch->xsk_zc);
return 0;
}
return i;
err_map:
if (!ch->xsk_zc) {
__free_pages(page, 0);
} else {
for (; i < batch; i++)
xsk_buff_free(xdp_buffs[i]);
}
err_alloc:
/* If we managed to allocate at least some buffers,
* release them to hardware
*/
if (i)
goto release_bufs;
return 0;
}
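/* Seed the channel's buffer pool up to DPAA2_ETH_NUM_BUFS buffers,
 * DPAA2_ETH_BUFS_PER_CMD at a time.
 */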
static int dpaa2_eth_seed_pool(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch)
{
int i;
int new_count;
for (i = 0; i < DPAA2_ETH_NUM_BUFS; i += DPAA2_ETH_BUFS_PER_CMD) {
new_count = dpaa2_eth_add_bufs(priv, ch);
ch->buf_count += new_count;
if (new_count < DPAA2_ETH_BUFS_PER_CMD)
return -ENOMEM;
}
return 0;
}
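/* Seed the buffer pools of all channels; a partially filled pool is
 * reported but not treated as a fatal error.
 */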
static void dpaa2_eth_seed_pools(struct dpaa2_eth_priv *priv)
{
struct net_device *net_dev = priv->net_dev;
struct dpaa2_eth_channel *channel;
int i, err = 0;
for (i = 0; i < priv->num_channels; i++) {
channel = priv->channel[i];
err = dpaa2_eth_seed_pool(priv, channel);
/* Not much to do; the buffer pool, though not filled up,
* may still contain some buffers which would enable us
* to limp on.
*/
if (err)
netdev_err(net_dev, "Buffer seeding failed for DPBP %d (bpid=%d)\n",
channel->bp->dev->obj_desc.id,
channel->bp->bpid);
}
}
/*
* Drain the specified number of buffers from one of the DPNI's private buffer
* pools.
* @count must not exceed DPAA2_ETH_BUFS_PER_CMD
*/
static void dpaa2_eth_drain_bufs(struct dpaa2_eth_priv *priv, int bpid,
int count)
{
u64 buf_array[DPAA2_ETH_BUFS_PER_CMD];
bool xsk_zc = false;
int retries = 0;
int i, ret;
for (i = 0; i < priv->num_channels; i++)
if (priv->channel[i]->bp->bpid == bpid)
xsk_zc = priv->channel[i]->xsk_zc;
do {
ret = dpaa2_io_service_acquire(NULL, bpid, buf_array, count);
if (ret < 0) {
if (ret == -EBUSY &&
retries++ < DPAA2_ETH_SWP_BUSY_RETRIES)
continue;
netdev_err(priv->net_dev, "dpaa2_io_service_acquire() failed\n");
return;
}
dpaa2_eth_free_bufs(priv, buf_array, ret, xsk_zc);
retries = 0;
} while (ret);
}
static void dpaa2_eth_drain_pool(struct dpaa2_eth_priv *priv, int bpid)
{
int i;
/* Drain the buffer pool */
dpaa2_eth_drain_bufs(priv, bpid, DPAA2_ETH_BUFS_PER_CMD);
dpaa2_eth_drain_bufs(priv, bpid, 1);
/* Reset to zero the buffer count of all channels that were
* using this buffer pool.
*/
for (i = 0; i < priv->num_channels; i++)
if (priv->channel[i]->bp->bpid == bpid)
priv->channel[i]->buf_count = 0;
}
static void dpaa2_eth_drain_pools(struct dpaa2_eth_priv *priv)
{
int i;
for (i = 0; i < priv->num_bps; i++)
dpaa2_eth_drain_pool(priv, priv->bp[i]->bpid);
}
/* Function is called from softirq context only, so we don't need to guard
* the access to percpu count
*/
static int dpaa2_eth_refill_pool(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *ch)
{
int new_count;
if (likely(ch->buf_count >= DPAA2_ETH_REFILL_THRESH))
return 0;
do {
new_count = dpaa2_eth_add_bufs(priv, ch);
if (unlikely(!new_count)) {
/* Out of memory; abort for now, we'll try later on */
break;
}
ch->buf_count += new_count;
} while (ch->buf_count < DPAA2_ETH_NUM_BUFS);
if (unlikely(ch->buf_count < DPAA2_ETH_NUM_BUFS))
return -ENOMEM;
return 0;
}
static void dpaa2_eth_sgt_cache_drain(struct dpaa2_eth_priv *priv)
{
struct dpaa2_eth_sgt_cache *sgt_cache;
u16 count;
int k, i;
for_each_possible_cpu(k) {
sgt_cache = per_cpu_ptr(priv->sgt_cache, k);
count = sgt_cache->count;
for (i = 0; i < count; i++)
skb_free_frag(sgt_cache->buf[i]);
sgt_cache->count = 0;
}
}
static int dpaa2_eth_pull_channel(struct dpaa2_eth_channel *ch)
{
int err;
int dequeues = -1;
/* Retry while portal is busy */
do {
err = dpaa2_io_service_pull_channel(ch->dpio, ch->ch_id,
ch->store);
dequeues++;
cpu_relax();
} while (err == -EBUSY && dequeues < DPAA2_ETH_SWP_BUSY_RETRIES);
ch->stats.dequeue_portal_busy += dequeues;
if (unlikely(err))
ch->stats.pull_err++;
return err;
}
/* NAPI poll routine
*
* Frames are dequeued from the QMan channel associated with this NAPI context.
* Rx, Tx confirmation and (if configured) Rx error frames all count
* towards the NAPI budget.
*/
static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
{
struct dpaa2_eth_channel *ch;
struct dpaa2_eth_priv *priv;
int rx_cleaned = 0, txconf_cleaned = 0;
struct dpaa2_eth_fq *fq, *txc_fq = NULL;
struct netdev_queue *nq;
int store_cleaned, work_done;
bool work_done_zc = false;
struct list_head rx_list;
int retries = 0;
u16 flowid;
int err;
ch = container_of(napi, struct dpaa2_eth_channel, napi);
ch->xdp.res = 0;
priv = ch->priv;
INIT_LIST_HEAD(&rx_list);
ch->rx_list = &rx_list;
if (ch->xsk_zc) {
work_done_zc = dpaa2_xsk_tx(priv, ch);
/* If we reached the XSK Tx per NAPI threshold, we're done */
if (work_done_zc) {
work_done = budget;
goto out;
}
}
do {
err = dpaa2_eth_pull_channel(ch);
if (unlikely(err))
break;
/* Refill pool if appropriate */
dpaa2_eth_refill_pool(priv, ch);
store_cleaned = dpaa2_eth_consume_frames(ch, &fq);
if (store_cleaned <= 0)
break;
if (fq->type == DPAA2_RX_FQ) {
rx_cleaned += store_cleaned;
flowid = fq->flowid;
} else {
txconf_cleaned += store_cleaned;
/* We have a single Tx conf FQ on this channel */
txc_fq = fq;
}
/* If we either consumed the whole NAPI budget with Rx frames
* or we reached the Tx confirmations threshold, we're done.
*/
if (rx_cleaned >= budget ||
txconf_cleaned >= DPAA2_ETH_TXCONF_PER_NAPI) {
work_done = budget;
if (ch->xdp.res & XDP_REDIRECT)
xdp_do_flush();
goto out;
}
} while (store_cleaned);
if (ch->xdp.res & XDP_REDIRECT)
xdp_do_flush();
/* Update NET DIM with the values for this CDAN */
dpaa2_io_update_net_dim(ch->dpio, ch->stats.frames_per_cdan,
ch->stats.bytes_per_cdan);
ch->stats.frames_per_cdan = 0;
ch->stats.bytes_per_cdan = 0;
/* We didn't consume the entire budget, so finish napi and
* re-enable data availability notifications
*/
napi_complete_done(napi, rx_cleaned);
do {
err = dpaa2_io_service_rearm(ch->dpio, &ch->nctx);
cpu_relax();
} while (err == -EBUSY && retries++ < DPAA2_ETH_SWP_BUSY_RETRIES);
WARN_ONCE(err, "CDAN notifications rearm failed on core %d",
ch->nctx.desired_cpu);
work_done = max(rx_cleaned, 1);
out:
netif_receive_skb_list(ch->rx_list);
if (ch->xsk_tx_pkts_sent) {
xsk_tx_completed(ch->xsk_pool, ch->xsk_tx_pkts_sent);
ch->xsk_tx_pkts_sent = 0;
}
if (txc_fq && txc_fq->dq_frames) {
nq = netdev_get_tx_queue(priv->net_dev, txc_fq->flowid);
netdev_tx_completed_queue(nq, txc_fq->dq_frames,
txc_fq->dq_bytes);
txc_fq->dq_frames = 0;
txc_fq->dq_bytes = 0;
}
if (rx_cleaned && ch->xdp.res & XDP_TX)
dpaa2_eth_xdp_tx_flush(priv, ch, &priv->fq[flowid]);
return work_done;
}
static void dpaa2_eth_enable_ch_napi(struct dpaa2_eth_priv *priv)
{
struct dpaa2_eth_channel *ch;
int i;
for (i = 0; i < priv->num_channels; i++) {
ch = priv->channel[i];
napi_enable(&ch->napi);
}
}
static void dpaa2_eth_disable_ch_napi(struct dpaa2_eth_priv *priv)
{
struct dpaa2_eth_channel *ch;
int i;
for (i = 0; i < priv->num_channels; i++) {
ch = priv->channel[i];
napi_disable(&ch->napi);
}
}
void dpaa2_eth_set_rx_taildrop(struct dpaa2_eth_priv *priv,
bool tx_pause, bool pfc)
{
struct dpni_taildrop td = {0};
struct dpaa2_eth_fq *fq;
int i, err;
/* FQ taildrop: threshold is in bytes, per frame queue. Enabled if
* flow control is disabled (as it might interfere with either the
* buffer pool depletion trigger for pause frames or with the group
* congestion trigger for PFC frames)
*/
td.enable = !tx_pause;
if (priv->rx_fqtd_enabled == td.enable)
goto set_cgtd;
td.threshold = DPAA2_ETH_FQ_TAILDROP_THRESH;
td.units = DPNI_CONGESTION_UNIT_BYTES;
for (i = 0; i < priv->num_fqs; i++) {
fq = &priv->fq[i];
if (fq->type != DPAA2_RX_FQ)
continue;
err = dpni_set_taildrop(priv->mc_io, 0, priv->mc_token,
DPNI_CP_QUEUE, DPNI_QUEUE_RX,
fq->tc, fq->flowid, &td);
if (err) {
netdev_err(priv->net_dev,
"dpni_set_taildrop(FQ) failed\n");
return;
}
}
priv->rx_fqtd_enabled = td.enable;
set_cgtd:
/* Congestion group taildrop: threshold is in frames, per group
* of FQs belonging to the same traffic class
* Enabled if general Tx pause disabled or if PFCs are enabled
* (congestion group threshold for PFC generation is lower than the
* CG taildrop threshold, so it won't interfere with it; we also
* want frames in non-PFC enabled traffic classes to be kept in check)
*/
td.enable = !tx_pause || pfc;
if (priv->rx_cgtd_enabled == td.enable)
return;
td.threshold = DPAA2_ETH_CG_TAILDROP_THRESH(priv);
td.units = DPNI_CONGESTION_UNIT_FRAMES;
for (i = 0; i < dpaa2_eth_tc_count(priv); i++) {
err = dpni_set_taildrop(priv->mc_io, 0, priv->mc_token,
DPNI_CP_GROUP, DPNI_QUEUE_RX,
i, 0, &td);
if (err) {
netdev_err(priv->net_dev,
"dpni_set_taildrop(CG) failed\n");
return;
}
}
priv->rx_cgtd_enabled = td.enable;
}
static int dpaa2_eth_link_state_update(struct dpaa2_eth_priv *priv)
{
struct dpni_link_state state = {0};
bool tx_pause;
int err;
err = dpni_get_link_state(priv->mc_io, 0, priv->mc_token, &state);
if (unlikely(err)) {
netdev_err(priv->net_dev,
"dpni_get_link_state() failed\n");
return err;
}
/* If Tx pause frame settings have changed, we need to update
* Rx FQ taildrop configuration as well. We configure taildrop
* only when pause frame generation is disabled.
*/
tx_pause = dpaa2_eth_tx_pause_enabled(state.options);
dpaa2_eth_set_rx_taildrop(priv, tx_pause, priv->pfc_enabled);
/* When we manage the MAC/PHY using phylink there is no need
* to manually update the netif_carrier.
* We can avoid locking because we are called from the "link changed"
* IRQ handler, which is the same as the "endpoint changed" IRQ handler
* (the writer to priv->mac), so we cannot race with it.
*/
if (dpaa2_mac_is_type_phy(priv->mac))
goto out;
/* Check link state; speed / duplex changes are not treated yet */
if (priv->link_state.up == state.up)
goto out;
if (state.up) {
netif_carrier_on(priv->net_dev);
netif_tx_start_all_queues(priv->net_dev);
} else {
netif_tx_stop_all_queues(priv->net_dev);
netif_carrier_off(priv->net_dev);
}
netdev_info(priv->net_dev, "Link Event: state %s\n",
state.up ? "up" : "down");
out:
priv->link_state = state;
return 0;
}
static int dpaa2_eth_open(struct net_device *net_dev)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
int err;
dpaa2_eth_seed_pools(priv);
mutex_lock(&priv->mac_lock);
if (!dpaa2_eth_is_type_phy(priv)) {
/* We'll only start the txqs when the link is actually ready;
* make sure we don't race against the link up notification,
* which may come immediately after dpni_enable();
*/
netif_tx_stop_all_queues(net_dev);
/* Also, explicitly set carrier off, otherwise
* netif_carrier_ok() will return true and cause 'ip link show'
* to report the LOWER_UP flag, even though the link
* notification wasn't even received.
*/
netif_carrier_off(net_dev);
}
dpaa2_eth_enable_ch_napi(priv);
err = dpni_enable(priv->mc_io, 0, priv->mc_token);
if (err < 0) {
mutex_unlock(&priv->mac_lock);
netdev_err(net_dev, "dpni_enable() failed\n");
goto enable_err;
}
if (dpaa2_eth_is_type_phy(priv))
dpaa2_mac_start(priv->mac);
mutex_unlock(&priv->mac_lock);
return 0;
enable_err:
dpaa2_eth_disable_ch_napi(priv);
dpaa2_eth_drain_pools(priv);
return err;
}
/* Total number of in-flight frames on ingress queues */
static u32 dpaa2_eth_ingress_fq_count(struct dpaa2_eth_priv *priv)
{
struct dpaa2_eth_fq *fq;
u32 fcnt = 0, bcnt = 0, total = 0;
int i, err;
for (i = 0; i < priv->num_fqs; i++) {
fq = &priv->fq[i];
err = dpaa2_io_query_fq_count(NULL, fq->fqid, &fcnt, &bcnt);
if (err) {
netdev_warn(priv->net_dev, "query_fq_count failed");
break;
}
total += fcnt;
}
return total;
}
static void dpaa2_eth_wait_for_ingress_fq_empty(struct dpaa2_eth_priv *priv)
{
int retries = 10;
u32 pending;
do {
pending = dpaa2_eth_ingress_fq_count(priv);
if (pending)
msleep(100);
} while (pending && --retries);
}
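/* A hedged note, derived from the code below: DPNI statistics page 6 exposes
 * a Tx pending frames counter starting with DPNI API version 7.13 (see the
 * defines that follow); on older firmware we cannot poll it, so
 * dpaa2_eth_wait_for_egress_fq_empty() just sleeps and assumes the in-flight
 * Tx frames have drained.
 */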
#define DPNI_TX_PENDING_VER_MAJOR 7
#define DPNI_TX_PENDING_VER_MINOR 13
static void dpaa2_eth_wait_for_egress_fq_empty(struct dpaa2_eth_priv *priv)
{
union dpni_statistics stats;
int retries = 10;
int err;
if (dpaa2_eth_cmp_dpni_ver(priv, DPNI_TX_PENDING_VER_MAJOR,
DPNI_TX_PENDING_VER_MINOR) < 0)
goto out;
do {
err = dpni_get_statistics(priv->mc_io, 0, priv->mc_token, 6,
&stats);
if (err)
goto out;
if (stats.page_6.tx_pending_frames == 0)
return;
} while (--retries);
out:
msleep(500);
}
static int dpaa2_eth_stop(struct net_device *net_dev)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
int dpni_enabled = 0;
int retries = 10;
mutex_lock(&priv->mac_lock);
if (dpaa2_eth_is_type_phy(priv)) {
dpaa2_mac_stop(priv->mac);
} else {
netif_tx_stop_all_queues(net_dev);
netif_carrier_off(net_dev);
}
mutex_unlock(&priv->mac_lock);
/* On dpni_disable(), the MC firmware will:
* - stop MAC Rx and wait for all Rx frames to be enqueued to software
* - cut off WRIOP dequeues from egress FQs and wait until transmission
* of all in flight Tx frames is finished (and corresponding Tx conf
* frames are enqueued back to software)
*
* Before calling dpni_disable(), we wait for all Tx frames to arrive
* on WRIOP. After it finishes, wait until all remaining frames on Rx
* and Tx conf queues are consumed on NAPI poll.
*/
dpaa2_eth_wait_for_egress_fq_empty(priv);
do {
dpni_disable(priv->mc_io, 0, priv->mc_token);
dpni_is_enabled(priv->mc_io, 0, priv->mc_token, &dpni_enabled);
if (dpni_enabled)
/* Allow the hardware some slack */
msleep(100);
} while (dpni_enabled && --retries);
if (!retries) {
netdev_warn(net_dev, "Retry count exceeded disabling DPNI\n");
/* Must go on and disable NAPI nonetheless, so we don't crash at
* the next "ifconfig up"
*/
}
dpaa2_eth_wait_for_ingress_fq_empty(priv);
dpaa2_eth_disable_ch_napi(priv);
/* Empty the buffer pool */
dpaa2_eth_drain_pools(priv);
/* Empty the Scatter-Gather Buffer cache */
dpaa2_eth_sgt_cache_drain(priv);
return 0;
}
static int dpaa2_eth_set_addr(struct net_device *net_dev, void *addr)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
struct device *dev = net_dev->dev.parent;
int err;
err = eth_mac_addr(net_dev, addr);
if (err < 0) {
dev_err(dev, "eth_mac_addr() failed (%d)\n", err);
return err;
}
err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
net_dev->dev_addr);
if (err) {
dev_err(dev, "dpni_set_primary_mac_addr() failed (%d)\n", err);
return err;
}
return 0;
}
/* Fill in counters maintained by the GPP driver. These may be different from
* the hardware counters obtained by ethtool.
*/
static void dpaa2_eth_get_stats(struct net_device *net_dev,
struct rtnl_link_stats64 *stats)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
struct rtnl_link_stats64 *percpu_stats;
u64 *cpustats;
u64 *netstats = (u64 *)stats;
int i, j;
int num = sizeof(struct rtnl_link_stats64) / sizeof(u64);
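/* Summing the per-cpu stats as a flat array of u64 words relies on
 * struct rtnl_link_stats64 containing nothing but u64 counters; 'num' is
 * the number of such counters.
 */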
for_each_possible_cpu(i) {
percpu_stats = per_cpu_ptr(priv->percpu_stats, i);
cpustats = (u64 *)percpu_stats;
for (j = 0; j < num; j++)
netstats[j] += cpustats[j];
}
}
/* Copy mac unicast addresses from @net_dev to @priv.
* Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
*/
static void dpaa2_eth_add_uc_hw_addr(const struct net_device *net_dev,
struct dpaa2_eth_priv *priv)
{
struct netdev_hw_addr *ha;
int err;
netdev_for_each_uc_addr(ha, net_dev) {
err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
ha->addr);
if (err)
netdev_warn(priv->net_dev,
"Could not add ucast MAC %pM to the filtering table (err %d)\n",
ha->addr, err);
}
}
/* Copy mac multicast addresses from @net_dev to @priv
* Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
*/
static void dpaa2_eth_add_mc_hw_addr(const struct net_device *net_dev,
struct dpaa2_eth_priv *priv)
{
struct netdev_hw_addr *ha;
int err;
netdev_for_each_mc_addr(ha, net_dev) {
err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
ha->addr);
if (err)
netdev_warn(priv->net_dev,
"Could not add mcast MAC %pM to the filtering table (err %d)\n",
ha->addr, err);
}
}
static int dpaa2_eth_rx_add_vid(struct net_device *net_dev,
__be16 vlan_proto, u16 vid)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
int err;
err = dpni_add_vlan_id(priv->mc_io, 0, priv->mc_token,
vid, 0, 0, 0);
if (err) {
netdev_warn(priv->net_dev,
"Could not add the vlan id %u\n",
vid);
return err;
}
return 0;
}
static int dpaa2_eth_rx_kill_vid(struct net_device *net_dev,
__be16 vlan_proto, u16 vid)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
int err;
err = dpni_remove_vlan_id(priv->mc_io, 0, priv->mc_token, vid);
if (err) {
netdev_warn(priv->net_dev,
"Could not remove the vlan id %u\n",
vid);
return err;
}
return 0;
}
static void dpaa2_eth_set_rx_mode(struct net_device *net_dev)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
int uc_count = netdev_uc_count(net_dev);
int mc_count = netdev_mc_count(net_dev);
u8 max_mac = priv->dpni_attrs.mac_filter_entries;
u32 options = priv->dpni_attrs.options;
u16 mc_token = priv->mc_token;
struct fsl_mc_io *mc_io = priv->mc_io;
int err;
/* Basic sanity checks; these probably indicate a misconfiguration */
if (options & DPNI_OPT_NO_MAC_FILTER && max_mac != 0)
netdev_info(net_dev,
"mac_filter_entries=%d, DPNI_OPT_NO_MAC_FILTER option must be disabled\n",
max_mac);
/* Force promiscuous if the uc or mc counts exceed our capabilities. */
if (uc_count > max_mac) {
netdev_info(net_dev,
"Unicast addr count reached %d, max allowed is %d; forcing promisc\n",
uc_count, max_mac);
goto force_promisc;
}
if (mc_count + uc_count > max_mac) {
netdev_info(net_dev,
"Unicast + multicast addr count reached %d, max allowed is %d; forcing promisc\n",
uc_count + mc_count, max_mac);
goto force_mc_promisc;
}
/* Adjust promisc settings due to flag combinations */
if (net_dev->flags & IFF_PROMISC)
goto force_promisc;
if (net_dev->flags & IFF_ALLMULTI) {
/* First, rebuild unicast filtering table. This should be done
* in promisc mode, in order to avoid frame loss while we
* progressively add entries to the table.
* We don't know whether we had been in promisc already, and
* making an MC call to find out is expensive; so set uc promisc
* nonetheless.
*/
err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
if (err)
netdev_warn(net_dev, "Can't set uc promisc\n");
/* Actual uc table reconstruction. */
err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 0);
if (err)
netdev_warn(net_dev, "Can't clear uc filters\n");
dpaa2_eth_add_uc_hw_addr(net_dev, priv);
/* Finally, clear uc promisc and set mc promisc as requested. */
err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
if (err)
netdev_warn(net_dev, "Can't clear uc promisc\n");
goto force_mc_promisc;
}
/* Neither unicast, nor multicast promisc will be on... eventually.
* For now, rebuild mac filtering tables while forcing both of them on.
*/
err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
if (err)
netdev_warn(net_dev, "Can't set uc promisc (%d)\n", err);
err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
if (err)
netdev_warn(net_dev, "Can't set mc promisc (%d)\n", err);
/* Actual mac filtering tables reconstruction */
err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 1);
if (err)
netdev_warn(net_dev, "Can't clear mac filters\n");
dpaa2_eth_add_mc_hw_addr(net_dev, priv);
dpaa2_eth_add_uc_hw_addr(net_dev, priv);
/* Now we can clear both ucast and mcast promisc, without the risk
* of dropping legitimate frames anymore.
*/
err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
if (err)
netdev_warn(net_dev, "Can't clear ucast promisc\n");
err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 0);
if (err)
netdev_warn(net_dev, "Can't clear mcast promisc\n");
return;
force_promisc:
err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
if (err)
netdev_warn(net_dev, "Can't set ucast promisc\n");
force_mc_promisc:
err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
if (err)
netdev_warn(net_dev, "Can't set mcast promisc\n");
}
static int dpaa2_eth_set_features(struct net_device *net_dev,
netdev_features_t features)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
netdev_features_t changed = features ^ net_dev->features;
bool enable;
int err;
if (changed & NETIF_F_HW_VLAN_CTAG_FILTER) {
enable = !!(features & NETIF_F_HW_VLAN_CTAG_FILTER);
err = dpaa2_eth_set_rx_vlan_filtering(priv, enable);
if (err)
return err;
}
if (changed & NETIF_F_RXCSUM) {
enable = !!(features & NETIF_F_RXCSUM);
err = dpaa2_eth_set_rx_csum(priv, enable);
if (err)
return err;
}
if (changed & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
enable = !!(features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM));
err = dpaa2_eth_set_tx_csum(priv, enable);
if (err)
return err;
}
return 0;
}
static int dpaa2_eth_ts_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
struct dpaa2_eth_priv *priv = netdev_priv(dev);
struct hwtstamp_config config;
if (!dpaa2_ptp)
return -EINVAL;
if (copy_from_user(&config, rq->ifr_data, sizeof(config)))
return -EFAULT;
switch (config.tx_type) {
case HWTSTAMP_TX_OFF:
case HWTSTAMP_TX_ON:
case HWTSTAMP_TX_ONESTEP_SYNC:
priv->tx_tstamp_type = config.tx_type;
break;
default:
return -ERANGE;
}
if (config.rx_filter == HWTSTAMP_FILTER_NONE) {
priv->rx_tstamp = false;
} else {
priv->rx_tstamp = true;
/* TS is set for all frame types, not only those requested */
config.rx_filter = HWTSTAMP_FILTER_ALL;
}
if (priv->tx_tstamp_type == HWTSTAMP_TX_ONESTEP_SYNC)
dpaa2_ptp_onestep_reg_update_method(priv);
return copy_to_user(rq->ifr_data, &config, sizeof(config)) ?
-EFAULT : 0;
}
static int dpaa2_eth_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
struct dpaa2_eth_priv *priv = netdev_priv(dev);
int err;
if (cmd == SIOCSHWTSTAMP)
return dpaa2_eth_ts_ioctl(dev, rq, cmd);
mutex_lock(&priv->mac_lock);
if (dpaa2_eth_is_type_phy(priv)) {
err = phylink_mii_ioctl(priv->mac->phylink, rq, cmd);
mutex_unlock(&priv->mac_lock);
return err;
}
mutex_unlock(&priv->mac_lock);
return -EOPNOTSUPP;
}
static bool xdp_mtu_valid(struct dpaa2_eth_priv *priv, int mtu)
{
int mfl, linear_mfl;
mfl = DPAA2_ETH_L2_MAX_FRM(mtu);
linear_mfl = priv->rx_buf_size - DPAA2_ETH_RX_HWA_SIZE -
dpaa2_eth_rx_head_room(priv) - XDP_PACKET_HEADROOM;
if (mfl > linear_mfl) {
netdev_warn(priv->net_dev, "Maximum MTU for XDP is %d\n",
linear_mfl - VLAN_ETH_HLEN);
return false;
}
return true;
}
static int dpaa2_eth_set_rx_mfl(struct dpaa2_eth_priv *priv, int mtu, bool has_xdp)
{
int mfl, err;
/* We enforce a maximum Rx frame length based on MTU only if we have
* an XDP program attached (in order to avoid Rx S/G frames).
* Otherwise, we accept all incoming frames as long as they are not
* larger than the maximum size supported in hardware.
*/
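/* As an illustration, assuming DPAA2_ETH_L2_MAX_FRM() adds the Ethernet
 * header plus one VLAN tag (VLAN_ETH_HLEN) on top of the MTU, an MTU of
 * 1500 maps to a maximum frame length of 1518 bytes.
 */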
if (has_xdp)
mfl = DPAA2_ETH_L2_MAX_FRM(mtu);
else
mfl = DPAA2_ETH_MFL;
err = dpni_set_max_frame_length(priv->mc_io, 0, priv->mc_token, mfl);
if (err) {
netdev_err(priv->net_dev, "dpni_set_max_frame_length failed\n");
return err;
}
return 0;
}
static int dpaa2_eth_change_mtu(struct net_device *dev, int new_mtu)
{
struct dpaa2_eth_priv *priv = netdev_priv(dev);
int err;
if (!priv->xdp_prog)
goto out;
if (!xdp_mtu_valid(priv, new_mtu))
return -EINVAL;
err = dpaa2_eth_set_rx_mfl(priv, new_mtu, true);
if (err)
return err;
out:
WRITE_ONCE(dev->mtu, new_mtu);
return 0;
}
static int dpaa2_eth_update_rx_buffer_headroom(struct dpaa2_eth_priv *priv, bool has_xdp)
{
struct dpni_buffer_layout buf_layout = {0};
int err;
err = dpni_get_buffer_layout(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_RX, &buf_layout);
if (err) {
netdev_err(priv->net_dev, "dpni_get_buffer_layout failed\n");
return err;
}
/* Reserve extra headroom for XDP header size changes */
buf_layout.data_head_room = dpaa2_eth_rx_head_room(priv) +
(has_xdp ? XDP_PACKET_HEADROOM : 0);
buf_layout.options = DPNI_BUF_LAYOUT_OPT_DATA_HEAD_ROOM;
err = dpni_set_buffer_layout(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_RX, &buf_layout);
if (err) {
netdev_err(priv->net_dev, "dpni_set_buffer_layout failed\n");
return err;
}
return 0;
}
static int dpaa2_eth_setup_xdp(struct net_device *dev, struct bpf_prog *prog)
{
struct dpaa2_eth_priv *priv = netdev_priv(dev);
struct dpaa2_eth_channel *ch;
struct bpf_prog *old;
bool up, need_update;
int i, err;
if (prog && !xdp_mtu_valid(priv, dev->mtu))
return -EINVAL;
if (prog)
bpf_prog_add(prog, priv->num_channels);
up = netif_running(dev);
need_update = (!!priv->xdp_prog != !!prog);
if (up)
dev_close(dev);
/* While in xdp mode, enforce a maximum Rx frame size based on MTU.
* Also, when switching between xdp/non-xdp modes we need to reconfigure
* our Rx buffer layout. Buffer pool was drained on dpaa2_eth_stop,
* so we are sure no old format buffers will be used from now on.
*/
if (need_update) {
err = dpaa2_eth_set_rx_mfl(priv, dev->mtu, !!prog);
if (err)
goto out_err;
err = dpaa2_eth_update_rx_buffer_headroom(priv, !!prog);
if (err)
goto out_err;
}
old = xchg(&priv->xdp_prog, prog);
if (old)
bpf_prog_put(old);
for (i = 0; i < priv->num_channels; i++) {
ch = priv->channel[i];
old = xchg(&ch->xdp.prog, prog);
if (old)
bpf_prog_put(old);
}
if (up) {
err = dev_open(dev, NULL);
if (err)
return err;
}
return 0;
out_err:
if (prog)
bpf_prog_sub(prog, priv->num_channels);
if (up)
dev_open(dev, NULL);
return err;
}
static int dpaa2_eth_xdp(struct net_device *dev, struct netdev_bpf *xdp)
{
switch (xdp->command) {
case XDP_SETUP_PROG:
return dpaa2_eth_setup_xdp(dev, xdp->prog);
case XDP_SETUP_XSK_POOL:
return dpaa2_xsk_setup_pool(dev, xdp->xsk.pool, xdp->xsk.queue_id);
default:
return -EINVAL;
}
return 0;
}
static int dpaa2_eth_xdp_create_fd(struct net_device *net_dev,
struct xdp_frame *xdpf,
struct dpaa2_fd *fd)
{
struct device *dev = net_dev->dev.parent;
unsigned int needed_headroom;
struct dpaa2_eth_swa *swa;
void *buffer_start, *aligned_start;
dma_addr_t addr;
/* We require a minimum headroom to be able to transmit the frame.
* Otherwise return an error and let the original net_device handle it
*/
needed_headroom = dpaa2_eth_needed_headroom(NULL);
if (xdpf->headroom < needed_headroom)
return -EINVAL;
/* Setup the FD fields */
memset(fd, 0, sizeof(*fd));
/* Align FD address, if possible */
buffer_start = xdpf->data - needed_headroom;
aligned_start = PTR_ALIGN(buffer_start - DPAA2_ETH_TX_BUF_ALIGN,
DPAA2_ETH_TX_BUF_ALIGN);
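/* PTR_ALIGN() rounds up, so aligning (buffer_start - DPAA2_ETH_TX_BUF_ALIGN)
 * yields an aligned address at or below buffer_start; use it only if it
 * still lies within the xdp_frame's headroom.
 */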
if (aligned_start >= xdpf->data - xdpf->headroom)
buffer_start = aligned_start;
swa = (struct dpaa2_eth_swa *)buffer_start;
/* fill in necessary fields here */
swa->type = DPAA2_ETH_SWA_XDP;
swa->xdp.dma_size = xdpf->data + xdpf->len - buffer_start;
swa->xdp.xdpf = xdpf;
addr = dma_map_single(dev, buffer_start,
swa->xdp.dma_size,
DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, addr)))
return -ENOMEM;
dpaa2_fd_set_addr(fd, addr);
dpaa2_fd_set_offset(fd, xdpf->data - buffer_start);
dpaa2_fd_set_len(fd, xdpf->len);
dpaa2_fd_set_format(fd, dpaa2_fd_single);
dpaa2_fd_set_ctrl(fd, FD_CTRL_PTA);
return 0;
}
static int dpaa2_eth_xdp_xmit(struct net_device *net_dev, int n,
struct xdp_frame **frames, u32 flags)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
struct dpaa2_eth_xdp_fds *xdp_redirect_fds;
struct rtnl_link_stats64 *percpu_stats;
struct dpaa2_eth_fq *fq;
struct dpaa2_fd *fds;
int enqueued, i, err;
if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
return -EINVAL;
if (!netif_running(net_dev))
return -ENETDOWN;
fq = &priv->fq[smp_processor_id()];
xdp_redirect_fds = &fq->xdp_redirect_fds;
fds = xdp_redirect_fds->fds;
percpu_stats = this_cpu_ptr(priv->percpu_stats);
/* create a FD for each xdp_frame in the list received */
for (i = 0; i < n; i++) {
err = dpaa2_eth_xdp_create_fd(net_dev, frames[i], &fds[i]);
if (err)
break;
}
xdp_redirect_fds->num = i;
/* enqueue all the frame descriptors */
enqueued = dpaa2_eth_xdp_flush(priv, fq, xdp_redirect_fds);
/* update statistics */
percpu_stats->tx_packets += enqueued;
for (i = 0; i < enqueued; i++)
percpu_stats->tx_bytes += dpaa2_fd_get_len(&fds[i]);
return enqueued;
}
static int update_xps(struct dpaa2_eth_priv *priv)
{
struct net_device *net_dev = priv->net_dev;
int i, num_queues, netdev_queues;
struct dpaa2_eth_fq *fq;
cpumask_var_t xps_mask;
int err = 0;
if (!alloc_cpumask_var(&xps_mask, GFP_KERNEL))
return -ENOMEM;
num_queues = dpaa2_eth_queue_count(priv);
netdev_queues = (net_dev->num_tc ? : 1) * num_queues;
/* The first <num_queues> entries in priv->fq array are Tx/Tx conf
* queues, so only process those
*/
for (i = 0; i < netdev_queues; i++) {
fq = &priv->fq[i % num_queues];
cpumask_clear(xps_mask);
cpumask_set_cpu(fq->target_cpu, xps_mask);
err = netif_set_xps_queue(net_dev, xps_mask, i);
if (err) {
netdev_warn_once(net_dev, "Error setting XPS queue\n");
break;
}
}
free_cpumask_var(xps_mask);
return err;
}
static int dpaa2_eth_setup_mqprio(struct net_device *net_dev,
struct tc_mqprio_qopt *mqprio)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
u8 num_tc, num_queues;
int i;
mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS;
num_queues = dpaa2_eth_queue_count(priv);
num_tc = mqprio->num_tc;
if (num_tc == net_dev->num_tc)
return 0;
if (num_tc > dpaa2_eth_tc_count(priv)) {
netdev_err(net_dev, "Max %d traffic classes supported\n",
dpaa2_eth_tc_count(priv));
return -EOPNOTSUPP;
}
if (!num_tc) {
netdev_reset_tc(net_dev);
netif_set_real_num_tx_queues(net_dev, num_queues);
goto out;
}
netdev_set_num_tc(net_dev, num_tc);
netif_set_real_num_tx_queues(net_dev, num_tc * num_queues);
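/* Each traffic class gets its own contiguous block of num_queues netdev Tx
 * queues, i.e. netdev queue index = tc * num_queues + flow id. For example,
 * with 8 hardware queues and 2 TCs, netdev queues 0-7 carry TC 0 and
 * queues 8-15 carry TC 1.
 */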
for (i = 0; i < num_tc; i++)
netdev_set_tc_queue(net_dev, i, num_queues, i * num_queues);
out:
update_xps(priv);
return 0;
}
#define bps_to_mbits(rate) (div_u64((rate), 1000000) * 8)
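/* Illustration: a TBF rate of 125,000,000 bytes/s (1 Gbit/s) becomes
 * div_u64(125000000, 1000000) * 8 = 1000 Mbit/s; the integer division
 * truncates, so the configured rate is rounded down to a multiple of
 * 8 Mbit/s.
 */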
static int dpaa2_eth_setup_tbf(struct net_device *net_dev, struct tc_tbf_qopt_offload *p)
{
struct tc_tbf_qopt_offload_replace_params *cfg = &p->replace_params;
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
struct dpni_tx_shaping_cfg tx_cr_shaper = { 0 };
struct dpni_tx_shaping_cfg tx_er_shaper = { 0 };
int err;
if (p->command == TC_TBF_STATS)
return -EOPNOTSUPP;
/* Only per port Tx shaping */
if (p->parent != TC_H_ROOT)
return -EOPNOTSUPP;
if (p->command == TC_TBF_REPLACE) {
if (cfg->max_size > DPAA2_ETH_MAX_BURST_SIZE) {
netdev_err(net_dev, "burst size cannot be greater than %d\n",
DPAA2_ETH_MAX_BURST_SIZE);
return -EINVAL;
}
tx_cr_shaper.max_burst_size = cfg->max_size;
/* The TBF interface is in bytes/s, whereas DPAA2 expects the
* rate in Mbits/s
*/
tx_cr_shaper.rate_limit = bps_to_mbits(cfg->rate.rate_bytes_ps);
}
err = dpni_set_tx_shaping(priv->mc_io, 0, priv->mc_token, &tx_cr_shaper,
&tx_er_shaper, 0);
if (err) {
netdev_err(net_dev, "dpni_set_tx_shaping() = %d\n", err);
return err;
}
return 0;
}
static int dpaa2_eth_setup_tc(struct net_device *net_dev,
enum tc_setup_type type, void *type_data)
{
switch (type) {
case TC_SETUP_QDISC_MQPRIO:
return dpaa2_eth_setup_mqprio(net_dev, type_data);
case TC_SETUP_QDISC_TBF:
return dpaa2_eth_setup_tbf(net_dev, type_data);
default:
return -EOPNOTSUPP;
}
}
static const struct net_device_ops dpaa2_eth_ops = {
.ndo_open = dpaa2_eth_open,
.ndo_start_xmit = dpaa2_eth_tx,
.ndo_stop = dpaa2_eth_stop,
.ndo_set_mac_address = dpaa2_eth_set_addr,
.ndo_get_stats64 = dpaa2_eth_get_stats,
.ndo_set_rx_mode = dpaa2_eth_set_rx_mode,
.ndo_set_features = dpaa2_eth_set_features,
.ndo_eth_ioctl = dpaa2_eth_ioctl,
.ndo_change_mtu = dpaa2_eth_change_mtu,
.ndo_bpf = dpaa2_eth_xdp,
.ndo_xdp_xmit = dpaa2_eth_xdp_xmit,
.ndo_xsk_wakeup = dpaa2_xsk_wakeup,
.ndo_setup_tc = dpaa2_eth_setup_tc,
.ndo_vlan_rx_add_vid = dpaa2_eth_rx_add_vid,
.ndo_vlan_rx_kill_vid = dpaa2_eth_rx_kill_vid
};
static void dpaa2_eth_cdan_cb(struct dpaa2_io_notification_ctx *ctx)
{
struct dpaa2_eth_channel *ch;
ch = container_of(ctx, struct dpaa2_eth_channel, nctx);
/* Update NAPI statistics */
ch->stats.cdan++;
/* NAPI can also be scheduled from the AF_XDP Tx path. If NAPI is already
* scheduled, just mark a missed run so that it gets rescheduled; otherwise
* schedule it now.
*/
if (!napi_if_scheduled_mark_missed(&ch->napi))
napi_schedule(&ch->napi);
}
/* Allocate and configure a DPCON object */
static struct fsl_mc_device *dpaa2_eth_setup_dpcon(struct dpaa2_eth_priv *priv)
{
struct fsl_mc_device *dpcon;
struct device *dev = priv->net_dev->dev.parent;
int err;
err = fsl_mc_object_allocate(to_fsl_mc_device(dev),
FSL_MC_POOL_DPCON, &dpcon);
if (err) {
if (err == -ENXIO) {
dev_dbg(dev, "Waiting for DPCON\n");
err = -EPROBE_DEFER;
} else {
dev_info(dev, "Not enough DPCONs, will go on as-is\n");
}
return ERR_PTR(err);
}
err = dpcon_open(priv->mc_io, 0, dpcon->obj_desc.id, &dpcon->mc_handle);
if (err) {
dev_err(dev, "dpcon_open() failed\n");
goto free;
}
err = dpcon_reset(priv->mc_io, 0, dpcon->mc_handle);
if (err) {
dev_err(dev, "dpcon_reset() failed\n");
goto close;
}
err = dpcon_enable(priv->mc_io, 0, dpcon->mc_handle);
if (err) {
dev_err(dev, "dpcon_enable() failed\n");
goto close;
}
return dpcon;
close:
dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
free:
fsl_mc_object_free(dpcon);
return ERR_PTR(err);
}
static void dpaa2_eth_free_dpcon(struct dpaa2_eth_priv *priv,
struct fsl_mc_device *dpcon)
{
dpcon_disable(priv->mc_io, 0, dpcon->mc_handle);
dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
fsl_mc_object_free(dpcon);
}
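/* Allocate a software channel and its backing DPCON object, caching the
 * DPCON id and QBMan channel id for later use
 */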
static struct dpaa2_eth_channel *dpaa2_eth_alloc_channel(struct dpaa2_eth_priv *priv)
{
struct dpaa2_eth_channel *channel;
struct dpcon_attr attr;
struct device *dev = priv->net_dev->dev.parent;
int err;
channel = kzalloc(sizeof(*channel), GFP_KERNEL);
if (!channel)
return NULL;
channel->dpcon = dpaa2_eth_setup_dpcon(priv);
if (IS_ERR(channel->dpcon)) {
err = PTR_ERR(channel->dpcon);
goto err_setup;
}
err = dpcon_get_attributes(priv->mc_io, 0, channel->dpcon->mc_handle,
&attr);
if (err) {
dev_err(dev, "dpcon_get_attributes() failed\n");
goto err_get_attr;
}
channel->dpcon_id = attr.id;
channel->ch_id = attr.qbman_ch_id;
channel->priv = priv;
return channel;
err_get_attr:
dpaa2_eth_free_dpcon(priv, channel->dpcon);
err_setup:
kfree(channel);
return ERR_PTR(err);
}
static void dpaa2_eth_free_channel(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_channel *channel)
{
dpaa2_eth_free_dpcon(priv, channel->dpcon);
kfree(channel);
}
/* DPIO setup: allocate and configure QBMan channels, setup core affinity
* and register data availability notifications
*/
static int dpaa2_eth_setup_dpio(struct dpaa2_eth_priv *priv)
{
struct dpaa2_io_notification_ctx *nctx;
struct dpaa2_eth_channel *channel;
struct dpcon_notification_cfg dpcon_notif_cfg;
struct device *dev = priv->net_dev->dev.parent;
int i, err;
/* We want the ability to spread ingress traffic (RX, TX conf) to as
* many cores as possible, so we need one channel for each core
* (unless there are fewer queues than cores, in which case the extra
* channels would be wasted).
* Allocate one channel per core and register it to the core's
* affine DPIO. If not enough channels are available for all cores
* or if some cores don't have an affine DPIO, there will be no
* ingress frame processing on those cores.
*/
cpumask_clear(&priv->dpio_cpumask);
for_each_online_cpu(i) {
/* Try to allocate a channel */
channel = dpaa2_eth_alloc_channel(priv);
if (IS_ERR_OR_NULL(channel)) {
err = PTR_ERR_OR_ZERO(channel);
if (err == -EPROBE_DEFER)
dev_dbg(dev, "waiting for affine channel\n");
else
dev_info(dev,
"No affine channel for cpu %d and above\n", i);
goto err_alloc_ch;
}
priv->channel[priv->num_channels] = channel;
nctx = &channel->nctx;
nctx->is_cdan = 1;
nctx->cb = dpaa2_eth_cdan_cb;
nctx->id = channel->ch_id;
nctx->desired_cpu = i;
/* Register the new context */
channel->dpio = dpaa2_io_service_select(i);
err = dpaa2_io_service_register(channel->dpio, nctx, dev);
if (err) {
dev_dbg(dev, "No affine DPIO for cpu %d\n", i);
/* If there is no affine DPIO for this core, there's probably
 * none available for the next cores either. Signal we want
* to retry later, in case the DPIO devices weren't
* probed yet.
*/
err = -EPROBE_DEFER;
goto err_service_reg;
}
/* Register DPCON notification with MC */
dpcon_notif_cfg.dpio_id = nctx->dpio_id;
dpcon_notif_cfg.priority = 0;
dpcon_notif_cfg.user_ctx = nctx->qman64;
err = dpcon_set_notification(priv->mc_io, 0,
channel->dpcon->mc_handle,
&dpcon_notif_cfg);
if (err) {
dev_err(dev, "dpcon_set_notification failed()\n");
goto err_set_cdan;
}
/* If we managed to allocate a channel and also found an affine
* DPIO for this core, add it to the final mask
*/
cpumask_set_cpu(i, &priv->dpio_cpumask);
priv->num_channels++;
/* Stop if we already have enough channels to accommodate all
* RX and TX conf queues
*/
if (priv->num_channels == priv->dpni_attrs.num_queues)
break;
}
return 0;
err_set_cdan:
dpaa2_io_service_deregister(channel->dpio, nctx, dev);
err_service_reg:
dpaa2_eth_free_channel(priv, channel);
err_alloc_ch:
if (err == -EPROBE_DEFER) {
for (i = 0; i < priv->num_channels; i++) {
channel = priv->channel[i];
nctx = &channel->nctx;
dpaa2_io_service_deregister(channel->dpio, nctx, dev);
dpaa2_eth_free_channel(priv, channel);
}
priv->num_channels = 0;
return err;
}
if (cpumask_empty(&priv->dpio_cpumask)) {
dev_err(dev, "No cpu with an affine DPIO/DPCON\n");
return -ENODEV;
}
dev_info(dev, "Cores %*pbl available for processing ingress traffic\n",
cpumask_pr_args(&priv->dpio_cpumask));
return 0;
}
static void dpaa2_eth_free_dpio(struct dpaa2_eth_priv *priv)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpaa2_eth_channel *ch;
int i;
/* deregister CDAN notifications and free channels */
for (i = 0; i < priv->num_channels; i++) {
ch = priv->channel[i];
dpaa2_io_service_deregister(ch->dpio, &ch->nctx, dev);
dpaa2_eth_free_channel(priv, ch);
}
}
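/* Return the channel whose notification context is registered on the
 * given CPU
 */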
static struct dpaa2_eth_channel *dpaa2_eth_get_affine_channel(struct dpaa2_eth_priv *priv,
int cpu)
{
struct device *dev = priv->net_dev->dev.parent;
int i;
for (i = 0; i < priv->num_channels; i++)
if (priv->channel[i]->nctx.desired_cpu == cpu)
return priv->channel[i];
/* We should never get here. Issue a warning and return
* the first channel, because it's still better than nothing
*/
dev_warn(dev, "No affine channel found for cpu %d\n", cpu);
return priv->channel[0];
}
static void dpaa2_eth_set_fq_affinity(struct dpaa2_eth_priv *priv)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpaa2_eth_fq *fq;
int rx_cpu, txc_cpu;
int i;
/* For each FQ, pick one channel/CPU to deliver frames to.
* This may well change at runtime, either through irqbalance or
* through direct user intervention.
*/
rx_cpu = txc_cpu = cpumask_first(&priv->dpio_cpumask);
for (i = 0; i < priv->num_fqs; i++) {
fq = &priv->fq[i];
switch (fq->type) {
case DPAA2_RX_FQ:
case DPAA2_RX_ERR_FQ:
fq->target_cpu = rx_cpu;
rx_cpu = cpumask_next(rx_cpu, &priv->dpio_cpumask);
if (rx_cpu >= nr_cpu_ids)
rx_cpu = cpumask_first(&priv->dpio_cpumask);
break;
case DPAA2_TX_CONF_FQ:
fq->target_cpu = txc_cpu;
txc_cpu = cpumask_next(txc_cpu, &priv->dpio_cpumask);
if (txc_cpu >= nr_cpu_ids)
txc_cpu = cpumask_first(&priv->dpio_cpumask);
break;
default:
dev_err(dev, "Unknown FQ type: %d\n", fq->type);
}
fq->channel = dpaa2_eth_get_affine_channel(priv, fq->target_cpu);
}
update_xps(priv);
}
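/* Populate the frame queue array: Tx confirmation queues first, then one
 * Rx queue per (traffic class, flow id) pair, then the single Rx error
 * queue. E.g. with 8 queues and 2 traffic classes the array holds
 * 8 TxConf + 16 Rx + 1 RxErr entries.
 */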
static void dpaa2_eth_setup_fqs(struct dpaa2_eth_priv *priv)
{
int i, j;
/* We have one TxConf FQ per Tx flow.
* The number of Tx and Rx queues is the same.
* Tx queues come first in the fq array.
*/
for (i = 0; i < dpaa2_eth_queue_count(priv); i++) {
priv->fq[priv->num_fqs].type = DPAA2_TX_CONF_FQ;
priv->fq[priv->num_fqs].consume = dpaa2_eth_tx_conf;
priv->fq[priv->num_fqs++].flowid = (u16)i;
}
for (j = 0; j < dpaa2_eth_tc_count(priv); j++) {
for (i = 0; i < dpaa2_eth_queue_count(priv); i++) {
priv->fq[priv->num_fqs].type = DPAA2_RX_FQ;
priv->fq[priv->num_fqs].consume = dpaa2_eth_rx;
priv->fq[priv->num_fqs].tc = (u8)j;
priv->fq[priv->num_fqs++].flowid = (u16)i;
}
}
/* We have exactly one Rx error queue per DPNI */
priv->fq[priv->num_fqs].type = DPAA2_RX_ERR_FQ;
priv->fq[priv->num_fqs++].consume = dpaa2_eth_rx_err;
/* For each FQ, decide on which core to process incoming frames */
dpaa2_eth_set_fq_affinity(priv);
}
/* Allocate and configure a buffer pool */
struct dpaa2_eth_bp *dpaa2_eth_allocate_dpbp(struct dpaa2_eth_priv *priv)
{
struct device *dev = priv->net_dev->dev.parent;
struct fsl_mc_device *dpbp_dev;
struct dpbp_attr dpbp_attrs;
struct dpaa2_eth_bp *bp;
int err;
err = fsl_mc_object_allocate(to_fsl_mc_device(dev), FSL_MC_POOL_DPBP,
&dpbp_dev);
if (err) {
if (err == -ENXIO)
err = -EPROBE_DEFER;
else
dev_err(dev, "DPBP device allocation failed\n");
return ERR_PTR(err);
}
bp = kzalloc(sizeof(*bp), GFP_KERNEL);
if (!bp) {
err = -ENOMEM;
goto err_alloc;
}
err = dpbp_open(priv->mc_io, 0, dpbp_dev->obj_desc.id,
&dpbp_dev->mc_handle);
if (err) {
dev_err(dev, "dpbp_open() failed\n");
goto err_open;
}
err = dpbp_reset(priv->mc_io, 0, dpbp_dev->mc_handle);
if (err) {
dev_err(dev, "dpbp_reset() failed\n");
goto err_reset;
}
err = dpbp_enable(priv->mc_io, 0, dpbp_dev->mc_handle);
if (err) {
dev_err(dev, "dpbp_enable() failed\n");
goto err_enable;
}
err = dpbp_get_attributes(priv->mc_io, 0, dpbp_dev->mc_handle,
&dpbp_attrs);
if (err) {
dev_err(dev, "dpbp_get_attributes() failed\n");
goto err_get_attr;
}
bp->dev = dpbp_dev;
bp->bpid = dpbp_attrs.bpid;
return bp;
err_get_attr:
dpbp_disable(priv->mc_io, 0, dpbp_dev->mc_handle);
err_enable:
err_reset:
dpbp_close(priv->mc_io, 0, dpbp_dev->mc_handle);
err_open:
kfree(bp);
err_alloc:
fsl_mc_object_free(dpbp_dev);
return ERR_PTR(err);
}
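/* Allocate the default buffer pool and assign it to all channels */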
static int dpaa2_eth_setup_default_dpbp(struct dpaa2_eth_priv *priv)
{
struct dpaa2_eth_bp *bp;
int i;
bp = dpaa2_eth_allocate_dpbp(priv);
if (IS_ERR(bp))
return PTR_ERR(bp);
priv->bp[DPAA2_ETH_DEFAULT_BP_IDX] = bp;
priv->num_bps++;
for (i = 0; i < priv->num_channels; i++)
priv->channel[i]->bp = bp;
return 0;
}
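/* Drain and release a buffer pool; the freed slot in priv->bp[] is
 * backfilled with the last in-use entry so the array stays compact
 */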
void dpaa2_eth_free_dpbp(struct dpaa2_eth_priv *priv, struct dpaa2_eth_bp *bp)
{
int idx_bp;
/* Find the index at which this BP is stored */
for (idx_bp = 0; idx_bp < priv->num_bps; idx_bp++)
if (priv->bp[idx_bp] == bp)
break;
/* Drain the pool and disable the associated MC object */
dpaa2_eth_drain_pool(priv, bp->bpid);
dpbp_disable(priv->mc_io, 0, bp->dev->mc_handle);
dpbp_close(priv->mc_io, 0, bp->dev->mc_handle);
fsl_mc_object_free(bp->dev);
kfree(bp);
/* Move the last in use DPBP over in this position */
priv->bp[idx_bp] = priv->bp[priv->num_bps - 1];
priv->num_bps--;
}
static void dpaa2_eth_free_dpbps(struct dpaa2_eth_priv *priv)
{
int i;
for (i = 0; i < priv->num_bps; i++)
dpaa2_eth_free_dpbp(priv, priv->bp[i]);
}
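/* Configure the hardware buffer layout (annotations, alignment, headroom)
 * for the Tx, Tx confirmation and Rx queues
 */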
static int dpaa2_eth_set_buffer_layout(struct dpaa2_eth_priv *priv)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpni_buffer_layout buf_layout = {0};
u16 rx_buf_align;
int err;
/* We need to check for WRIOP version 1.0.0, but depending on the MC
* version, this number is not always provided correctly on rev1.
* We need to check for both alternatives in this situation.
*/
if (priv->dpni_attrs.wriop_version == DPAA2_WRIOP_VERSION(0, 0, 0) ||
priv->dpni_attrs.wriop_version == DPAA2_WRIOP_VERSION(1, 0, 0))
rx_buf_align = DPAA2_ETH_RX_BUF_ALIGN_REV1;
else
rx_buf_align = DPAA2_ETH_RX_BUF_ALIGN;
/* We need to ensure that the buffer size seen by WRIOP is a multiple
* of 64 or 256 bytes depending on the WRIOP version.
*/
priv->rx_buf_size = ALIGN_DOWN(DPAA2_ETH_RX_BUF_SIZE, rx_buf_align);
/* tx buffer */
buf_layout.private_data_size = DPAA2_ETH_SWA_SIZE;
buf_layout.pass_timestamp = true;
buf_layout.pass_frame_status = true;
buf_layout.options = DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE |
DPNI_BUF_LAYOUT_OPT_TIMESTAMP |
DPNI_BUF_LAYOUT_OPT_FRAME_STATUS;
err = dpni_set_buffer_layout(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_TX, &buf_layout);
if (err) {
dev_err(dev, "dpni_set_buffer_layout(TX) failed\n");
return err;
}
/* tx-confirm buffer */
buf_layout.options = DPNI_BUF_LAYOUT_OPT_TIMESTAMP |
DPNI_BUF_LAYOUT_OPT_FRAME_STATUS;
err = dpni_set_buffer_layout(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_TX_CONFIRM, &buf_layout);
if (err) {
dev_err(dev, "dpni_set_buffer_layout(TX_CONF) failed\n");
return err;
}
/* Now that we've set our tx buffer layout, retrieve the minimum
* required tx data offset.
*/
err = dpni_get_tx_data_offset(priv->mc_io, 0, priv->mc_token,
&priv->tx_data_offset);
if (err) {
dev_err(dev, "dpni_get_tx_data_offset() failed\n");
return err;
}
if ((priv->tx_data_offset % 64) != 0)
dev_warn(dev, "Tx data offset (%d) not a multiple of 64B\n",
priv->tx_data_offset);
/* rx buffer */
buf_layout.pass_frame_status = true;
buf_layout.pass_parser_result = true;
buf_layout.data_align = rx_buf_align;
buf_layout.data_head_room = dpaa2_eth_rx_head_room(priv);
buf_layout.private_data_size = 0;
buf_layout.options = DPNI_BUF_LAYOUT_OPT_PARSER_RESULT |
DPNI_BUF_LAYOUT_OPT_FRAME_STATUS |
DPNI_BUF_LAYOUT_OPT_DATA_ALIGN |
DPNI_BUF_LAYOUT_OPT_DATA_HEAD_ROOM |
DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
err = dpni_set_buffer_layout(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_RX, &buf_layout);
if (err) {
dev_err(dev, "dpni_set_buffer_layout(RX) failed\n");
return err;
}
return 0;
}
#define DPNI_ENQUEUE_FQID_VER_MAJOR 7
#define DPNI_ENQUEUE_FQID_VER_MINOR 9
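/* Enqueue a single frame descriptor through the Tx queuing destination
 * (QDID + qdbin); used on DPNI versions without FQID-based enqueue support
 */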
static inline int dpaa2_eth_enqueue_qd(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_fq *fq,
struct dpaa2_fd *fd, u8 prio,
u32 num_frames __always_unused,
int *frames_enqueued)
{
int err;
err = dpaa2_io_service_enqueue_qd(fq->channel->dpio,
priv->tx_qdid, prio,
fq->tx_qdbin, fd);
if (!err && frames_enqueued)
*frames_enqueued = 1;
return err;
}
static inline int dpaa2_eth_enqueue_fq_multiple(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_fq *fq,
struct dpaa2_fd *fd,
u8 prio, u32 num_frames,
int *frames_enqueued)
{
int err;
err = dpaa2_io_service_enqueue_multiple_fq(fq->channel->dpio,
fq->tx_fqid[prio],
fd, num_frames);
if (err == 0)
return -EBUSY;
if (frames_enqueued)
*frames_enqueued = err;
return 0;
}
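/* Select the Tx enqueue callback based on the DPNI API version: QDID-based
 * enqueue for older firmware, multi-frame FQID-based enqueue otherwise
 */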
static void dpaa2_eth_set_enqueue_mode(struct dpaa2_eth_priv *priv)
{
if (dpaa2_eth_cmp_dpni_ver(priv, DPNI_ENQUEUE_FQID_VER_MAJOR,
DPNI_ENQUEUE_FQID_VER_MINOR) < 0)
priv->enqueue = dpaa2_eth_enqueue_qd;
else
priv->enqueue = dpaa2_eth_enqueue_fq_multiple;
}
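/* Enable symmetric Rx/Tx pause frames in the link configuration */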
static int dpaa2_eth_set_pause(struct dpaa2_eth_priv *priv)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpni_link_cfg link_cfg = {0};
int err;
/* Get the default link options so we don't override other flags */
err = dpni_get_link_cfg(priv->mc_io, 0, priv->mc_token, &link_cfg);
if (err) {
dev_err(dev, "dpni_get_link_cfg() failed\n");
return err;
}
/* By default, enable both Rx and Tx pause frames */
link_cfg.options |= DPNI_LINK_OPT_PAUSE;
link_cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
err = dpni_set_link_cfg(priv->mc_io, 0, priv->mc_token, &link_cfg);
if (err) {
dev_err(dev, "dpni_set_link_cfg() failed\n");
return err;
}
priv->link_state.options = link_cfg.options;
return 0;
}
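/* Refresh the cached Tx FQIDs and switch to FQID-based enqueue if all of
 * them are valid; otherwise fall back to QDID-based enqueue
 */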
static void dpaa2_eth_update_tx_fqids(struct dpaa2_eth_priv *priv)
{
struct dpni_queue_id qid = {0};
struct dpaa2_eth_fq *fq;
struct dpni_queue queue;
int i, j, err;
/* We only use Tx FQIDs for FQID-based enqueue, so check
* if DPNI version supports it before updating FQIDs
*/
if (dpaa2_eth_cmp_dpni_ver(priv, DPNI_ENQUEUE_FQID_VER_MAJOR,
DPNI_ENQUEUE_FQID_VER_MINOR) < 0)
return;
for (i = 0; i < priv->num_fqs; i++) {
fq = &priv->fq[i];
if (fq->type != DPAA2_TX_CONF_FQ)
continue;
for (j = 0; j < dpaa2_eth_tc_count(priv); j++) {
err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_TX, j, fq->flowid,
&queue, &qid);
if (err)
goto out_err;
fq->tx_fqid[j] = qid.fqid;
if (fq->tx_fqid[j] == 0)
goto out_err;
}
}
priv->enqueue = dpaa2_eth_enqueue_fq_multiple;
return;
out_err:
netdev_info(priv->net_dev,
"Error reading Tx FQID, fallback to QDID-based enqueue\n");
priv->enqueue = dpaa2_eth_enqueue_qd;
}
/* Configure ingress classification based on VLAN PCP */
static int dpaa2_eth_set_vlan_qos(struct dpaa2_eth_priv *priv)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpkg_profile_cfg kg_cfg = {0};
struct dpni_qos_tbl_cfg qos_cfg = {0};
struct dpni_rule_cfg key_params;
void *dma_mem, *key, *mask;
u8 key_size = 2; /* VLAN TCI field */
int i, pcp, err;
/* VLAN-based classification only makes sense if we have multiple
* traffic classes.
* Also, we need to extract just the 3-bit PCP field from the VLAN
* header and we can only do that by using a mask
*/
if (dpaa2_eth_tc_count(priv) == 1 || !dpaa2_eth_fs_mask_enabled(priv)) {
dev_dbg(dev, "VLAN-based QoS classification not supported\n");
return -EOPNOTSUPP;
}
dma_mem = kzalloc(DPAA2_CLASSIFIER_DMA_SIZE, GFP_KERNEL);
if (!dma_mem)
return -ENOMEM;
kg_cfg.num_extracts = 1;
kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN;
kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI;
err = dpni_prepare_key_cfg(&kg_cfg, dma_mem);
if (err) {
dev_err(dev, "dpni_prepare_key_cfg failed\n");
goto out_free_tbl;
}
/* set QoS table */
qos_cfg.default_tc = 0;
qos_cfg.discard_on_miss = 0;
qos_cfg.key_cfg_iova = dma_map_single(dev, dma_mem,
DPAA2_CLASSIFIER_DMA_SIZE,
DMA_TO_DEVICE);
if (dma_mapping_error(dev, qos_cfg.key_cfg_iova)) {
dev_err(dev, "QoS table DMA mapping failed\n");
err = -ENOMEM;
goto out_free_tbl;
}
err = dpni_set_qos_table(priv->mc_io, 0, priv->mc_token, &qos_cfg);
if (err) {
dev_err(dev, "dpni_set_qos_table failed\n");
goto out_unmap_tbl;
}
/* Add QoS table entries */
key = kzalloc(key_size * 2, GFP_KERNEL);
if (!key) {
err = -ENOMEM;
goto out_unmap_tbl;
}
mask = key + key_size;
*(__be16 *)mask = cpu_to_be16(VLAN_PRIO_MASK);
key_params.key_iova = dma_map_single(dev, key, key_size * 2,
DMA_TO_DEVICE);
if (dma_mapping_error(dev, key_params.key_iova)) {
dev_err(dev, "Qos table entry DMA mapping failed\n");
err = -ENOMEM;
goto out_free_key;
}
key_params.mask_iova = key_params.key_iova + key_size;
key_params.key_size = key_size;
/* We add rules for PCP-based distribution starting with highest
* priority (VLAN PCP = 7). If this DPNI doesn't have enough traffic
* classes to accommodate all priority levels, the lowest ones end up
* on TC 0 which was configured as default
*/
for (i = dpaa2_eth_tc_count(priv) - 1, pcp = 7; i >= 0; i--, pcp--) {
*(__be16 *)key = cpu_to_be16(pcp << VLAN_PRIO_SHIFT);
dma_sync_single_for_device(dev, key_params.key_iova,
key_size * 2, DMA_TO_DEVICE);
err = dpni_add_qos_entry(priv->mc_io, 0, priv->mc_token,
&key_params, i, i);
if (err) {
dev_err(dev, "dpni_add_qos_entry failed\n");
dpni_clear_qos_table(priv->mc_io, 0, priv->mc_token);
goto out_unmap_key;
}
}
priv->vlan_cls_enabled = true;
/* Table and key memory is not persistent, clean everything up after
* configuration is finished
*/
out_unmap_key:
dma_unmap_single(dev, key_params.key_iova, key_size * 2, DMA_TO_DEVICE);
out_free_key:
kfree(key);
out_unmap_tbl:
dma_unmap_single(dev, qos_cfg.key_cfg_iova, DPAA2_CLASSIFIER_DMA_SIZE,
DMA_TO_DEVICE);
out_free_tbl:
kfree(dma_mem);
return err;
}
/* Configure the DPNI object this interface is associated with */
static int dpaa2_eth_setup_dpni(struct fsl_mc_device *ls_dev)
{
struct device *dev = &ls_dev->dev;
struct dpaa2_eth_priv *priv;
struct net_device *net_dev;
int err;
net_dev = dev_get_drvdata(dev);
priv = netdev_priv(net_dev);
/* get a handle for the DPNI object */
err = dpni_open(priv->mc_io, 0, ls_dev->obj_desc.id, &priv->mc_token);
if (err) {
dev_err(dev, "dpni_open() failed\n");
return err;
}
/* Check if we can work with this DPNI object */
err = dpni_get_api_version(priv->mc_io, 0, &priv->dpni_ver_major,
&priv->dpni_ver_minor);
if (err) {
dev_err(dev, "dpni_get_api_version() failed\n");
goto close;
}
if (dpaa2_eth_cmp_dpni_ver(priv, DPNI_VER_MAJOR, DPNI_VER_MINOR) < 0) {
dev_err(dev, "DPNI version %u.%u not supported, need >= %u.%u\n",
priv->dpni_ver_major, priv->dpni_ver_minor,
DPNI_VER_MAJOR, DPNI_VER_MINOR);
err = -EOPNOTSUPP;
goto close;
}
ls_dev->mc_io = priv->mc_io;
ls_dev->mc_handle = priv->mc_token;
err = dpni_reset(priv->mc_io, 0, priv->mc_token);
if (err) {
dev_err(dev, "dpni_reset() failed\n");
goto close;
}
err = dpni_get_attributes(priv->mc_io, 0, priv->mc_token,
&priv->dpni_attrs);
if (err) {
dev_err(dev, "dpni_get_attributes() failed (err=%d)\n", err);
goto close;
}
err = dpaa2_eth_set_buffer_layout(priv);
if (err)
goto close;
dpaa2_eth_set_enqueue_mode(priv);
/* Enable pause frame support */
if (dpaa2_eth_has_pause_support(priv)) {
err = dpaa2_eth_set_pause(priv);
if (err)
goto close;
}
err = dpaa2_eth_set_vlan_qos(priv);
if (err && err != -EOPNOTSUPP)
goto close;
priv->cls_rules = devm_kcalloc(dev, dpaa2_eth_fs_count(priv),
sizeof(struct dpaa2_eth_cls_rule),
GFP_KERNEL);
if (!priv->cls_rules) {
err = -ENOMEM;
goto close;
}
return 0;
close:
dpni_close(priv->mc_io, 0, priv->mc_token);
return err;
}
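/* Reset and close the DPNI object */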
static void dpaa2_eth_free_dpni(struct dpaa2_eth_priv *priv)
{
int err;
err = dpni_reset(priv->mc_io, 0, priv->mc_token);
if (err)
netdev_warn(priv->net_dev, "dpni_reset() failed (err %d)\n",
err);
dpni_close(priv->mc_io, 0, priv->mc_token);
}
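/* Point an Rx frame queue at its channel's DPCON and register the
 * per-channel xdp_rxq info (only once, for traffic class 0)
 */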
static int dpaa2_eth_setup_rx_flow(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_fq *fq)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpni_queue queue;
struct dpni_queue_id qid;
int err;
err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_RX, fq->tc, fq->flowid, &queue, &qid);
if (err) {
dev_err(dev, "dpni_get_queue(RX) failed\n");
return err;
}
fq->fqid = qid.fqid;
queue.destination.id = fq->channel->dpcon_id;
queue.destination.type = DPNI_DEST_DPCON;
queue.destination.priority = 1;
queue.user_context = (u64)(uintptr_t)fq;
err = dpni_set_queue(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_RX, fq->tc, fq->flowid,
DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST,
&queue);
if (err) {
dev_err(dev, "dpni_set_queue(RX) failed\n");
return err;
}
/* xdp_rxq setup: only once for each channel */
if (fq->tc > 0)
return 0;
err = xdp_rxq_info_reg(&fq->channel->xdp_rxq, priv->net_dev,
fq->flowid, 0);
if (err) {
dev_err(dev, "xdp_rxq_info_reg failed\n");
return err;
}
err = xdp_rxq_info_reg_mem_model(&fq->channel->xdp_rxq,
MEM_TYPE_PAGE_ORDER0, NULL);
if (err) {
dev_err(dev, "xdp_rxq_info_reg_mem_model failed\n");
return err;
}
return 0;
}
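/* Cache the Tx FQIDs and qdbin for this flow and direct its Tx
 * confirmation queue to the channel's DPCON
 */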
static int dpaa2_eth_setup_tx_flow(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_fq *fq)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpni_queue queue;
struct dpni_queue_id qid;
int i, err;
for (i = 0; i < dpaa2_eth_tc_count(priv); i++) {
err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_TX, i, fq->flowid,
&queue, &qid);
if (err) {
dev_err(dev, "dpni_get_queue(TX) failed\n");
return err;
}
fq->tx_fqid[i] = qid.fqid;
}
/* All Tx queues belonging to the same flowid have the same qdbin */
fq->tx_qdbin = qid.qdbin;
err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_TX_CONFIRM, 0, fq->flowid,
&queue, &qid);
if (err) {
dev_err(dev, "dpni_get_queue(TX_CONF) failed\n");
return err;
}
fq->fqid = qid.fqid;
queue.destination.id = fq->channel->dpcon_id;
queue.destination.type = DPNI_DEST_DPCON;
queue.destination.priority = 0;
queue.user_context = (u64)(uintptr_t)fq;
err = dpni_set_queue(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_TX_CONFIRM, 0, fq->flowid,
DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST,
&queue);
if (err) {
dev_err(dev, "dpni_set_queue(TX_CONF) failed\n");
return err;
}
return 0;
}
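/* Direct the Rx error frame queue to its channel's DPCON */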
static int setup_rx_err_flow(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_fq *fq)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpni_queue q = { { 0 } };
struct dpni_queue_id qid;
u8 q_opt = DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST;
int err;
err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_RX_ERR, 0, 0, &q, &qid);
if (err) {
dev_err(dev, "dpni_get_queue() failed (%d)\n", err);
return err;
}
fq->fqid = qid.fqid;
q.destination.id = fq->channel->dpcon_id;
q.destination.type = DPNI_DEST_DPCON;
q.destination.priority = 1;
q.user_context = (u64)(uintptr_t)fq;
err = dpni_set_queue(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_RX_ERR, 0, 0, q_opt, &q);
if (err) {
dev_err(dev, "dpni_set_queue() failed (%d)\n", err);
return err;
}
return 0;
}
/* Supported header fields for Rx hash distribution key */
static const struct dpaa2_eth_dist_fields dist_fields[] = {
{
/* L2 header */
.rxnfc_field = RXH_L2DA,
.cls_prot = NET_PROT_ETH,
.cls_field = NH_FLD_ETH_DA,
.id = DPAA2_ETH_DIST_ETHDST,
.size = 6,
}, {
.cls_prot = NET_PROT_ETH,
.cls_field = NH_FLD_ETH_SA,
.id = DPAA2_ETH_DIST_ETHSRC,
.size = 6,
}, {
/* This is the last ethertype field parsed:
* depending on frame format, it can be the MAC ethertype
* or the VLAN etype.
*/
.cls_prot = NET_PROT_ETH,
.cls_field = NH_FLD_ETH_TYPE,
.id = DPAA2_ETH_DIST_ETHTYPE,
.size = 2,
}, {
/* VLAN header */
.rxnfc_field = RXH_VLAN,
.cls_prot = NET_PROT_VLAN,
.cls_field = NH_FLD_VLAN_TCI,
.id = DPAA2_ETH_DIST_VLAN,
.size = 2,
}, {
/* IP header */
.rxnfc_field = RXH_IP_SRC,
.cls_prot = NET_PROT_IP,
.cls_field = NH_FLD_IP_SRC,
.id = DPAA2_ETH_DIST_IPSRC,
.size = 4,
}, {
.rxnfc_field = RXH_IP_DST,
.cls_prot = NET_PROT_IP,
.cls_field = NH_FLD_IP_DST,
.id = DPAA2_ETH_DIST_IPDST,
.size = 4,
}, {
.rxnfc_field = RXH_L3_PROTO,
.cls_prot = NET_PROT_IP,
.cls_field = NH_FLD_IP_PROTO,
.id = DPAA2_ETH_DIST_IPPROTO,
.size = 1,
}, {
/* Using UDP ports, this is functionally equivalent to raw
* byte pairs from L4 header.
*/
.rxnfc_field = RXH_L4_B_0_1,
.cls_prot = NET_PROT_UDP,
.cls_field = NH_FLD_UDP_PORT_SRC,
.id = DPAA2_ETH_DIST_L4SRC,
.size = 2,
}, {
.rxnfc_field = RXH_L4_B_2_3,
.cls_prot = NET_PROT_UDP,
.cls_field = NH_FLD_UDP_PORT_DST,
.id = DPAA2_ETH_DIST_L4DST,
.size = 2,
},
};
/* Configure the Rx hash key using the legacy API */
static int dpaa2_eth_config_legacy_hash_key(struct dpaa2_eth_priv *priv, dma_addr_t key)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpni_rx_tc_dist_cfg dist_cfg;
int i, err = 0;
memset(&dist_cfg, 0, sizeof(dist_cfg));
dist_cfg.key_cfg_iova = key;
dist_cfg.dist_size = dpaa2_eth_queue_count(priv);
dist_cfg.dist_mode = DPNI_DIST_MODE_HASH;
for (i = 0; i < dpaa2_eth_tc_count(priv); i++) {
err = dpni_set_rx_tc_dist(priv->mc_io, 0, priv->mc_token,
i, &dist_cfg);
if (err) {
dev_err(dev, "dpni_set_rx_tc_dist failed\n");
break;
}
}
return err;
}
/* Configure the Rx hash key using the new API */
static int dpaa2_eth_config_hash_key(struct dpaa2_eth_priv *priv, dma_addr_t key)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpni_rx_dist_cfg dist_cfg;
int i, err = 0;
memset(&dist_cfg, 0, sizeof(dist_cfg));
dist_cfg.key_cfg_iova = key;
dist_cfg.dist_size = dpaa2_eth_queue_count(priv);
dist_cfg.enable = 1;
for (i = 0; i < dpaa2_eth_tc_count(priv); i++) {
dist_cfg.tc = i;
err = dpni_set_rx_hash_dist(priv->mc_io, 0, priv->mc_token,
&dist_cfg);
if (err) {
dev_err(dev, "dpni_set_rx_hash_dist failed\n");
break;
}
/* If the flow steering / hashing key is shared between all
* traffic classes, install it just once
*/
if (priv->dpni_attrs.options & DPNI_OPT_SHARED_FS)
break;
}
return err;
}
/* Configure the Rx flow classification key */
static int dpaa2_eth_config_cls_key(struct dpaa2_eth_priv *priv, dma_addr_t key)
{
struct device *dev = priv->net_dev->dev.parent;
struct dpni_rx_dist_cfg dist_cfg;
int i, err = 0;
memset(&dist_cfg, 0, sizeof(dist_cfg));
dist_cfg.key_cfg_iova = key;
dist_cfg.dist_size = dpaa2_eth_queue_count(priv);
dist_cfg.enable = 1;
for (i = 0; i < dpaa2_eth_tc_count(priv); i++) {
dist_cfg.tc = i;
err = dpni_set_rx_fs_dist(priv->mc_io, 0, priv->mc_token,
&dist_cfg);
if (err) {
dev_err(dev, "dpni_set_rx_fs_dist failed\n");
break;
}
/* If the flow steering / hashing key is shared between all
* traffic classes, install it just once
*/
if (priv->dpni_attrs.options & DPNI_OPT_SHARED_FS)
break;
}
return err;
}
/* Size of the Rx flow classification key */
int dpaa2_eth_cls_key_size(u64 fields)
{
int i, size = 0;
for (i = 0; i < ARRAY_SIZE(dist_fields); i++) {
if (!(fields & dist_fields[i].id))
continue;
size += dist_fields[i].size;
}
return size;
}
/* Offset of header field in Rx classification key */
int dpaa2_eth_cls_fld_off(int prot, int field)
{
int i, off = 0;
for (i = 0; i < ARRAY_SIZE(dist_fields); i++) {
if (dist_fields[i].cls_prot == prot &&
dist_fields[i].cls_field == field)
return off;
off += dist_fields[i].size;
}
WARN_ONCE(1, "Unsupported header field used for Rx flow cls\n");
return 0;
}
/* Prune unused fields from the classification rule.
* Used when masking is not supported
*/
void dpaa2_eth_cls_trim_rule(void *key_mem, u64 fields)
{
int off = 0, new_off = 0;
int i, size;
for (i = 0; i < ARRAY_SIZE(dist_fields); i++) {
size = dist_fields[i].size;
if (dist_fields[i].id & fields) {
memcpy(key_mem + new_off, key_mem + off, size);
new_off += size;
}
off += size;
}
}
/* Set Rx distribution (hash or flow classification) key
* flags is a combination of RXH_ bits
*/
static int dpaa2_eth_set_dist_key(struct net_device *net_dev,
enum dpaa2_eth_rx_dist type, u64 flags)
{
struct device *dev = net_dev->dev.parent;
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
struct dpkg_profile_cfg cls_cfg;
u32 rx_hash_fields = 0;
dma_addr_t key_iova;
u8 *dma_mem;
int i;
int err = 0;
memset(&cls_cfg, 0, sizeof(cls_cfg));
for (i = 0; i < ARRAY_SIZE(dist_fields); i++) {
struct dpkg_extract *key =
&cls_cfg.extracts[cls_cfg.num_extracts];
/* For both Rx hashing and classification keys
* we set only the selected fields.
*/
if (!(flags & dist_fields[i].id))
continue;
if (type == DPAA2_ETH_RX_DIST_HASH)
rx_hash_fields |= dist_fields[i].rxnfc_field;
if (cls_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
dev_err(dev, "error adding key extraction rule, too many rules?\n");
return -E2BIG;
}
key->type = DPKG_EXTRACT_FROM_HDR;
key->extract.from_hdr.prot = dist_fields[i].cls_prot;
key->extract.from_hdr.type = DPKG_FULL_FIELD;
key->extract.from_hdr.field = dist_fields[i].cls_field;
cls_cfg.num_extracts++;
}
dma_mem = kzalloc(DPAA2_CLASSIFIER_DMA_SIZE, GFP_KERNEL);
if (!dma_mem)
return -ENOMEM;
err = dpni_prepare_key_cfg(&cls_cfg, dma_mem);
if (err) {
dev_err(dev, "dpni_prepare_key_cfg error %d\n", err);
goto free_key;
}
/* Prepare for setting the rx dist */
key_iova = dma_map_single(dev, dma_mem, DPAA2_CLASSIFIER_DMA_SIZE,
DMA_TO_DEVICE);
if (dma_mapping_error(dev, key_iova)) {
dev_err(dev, "DMA mapping failed\n");
err = -ENOMEM;
goto free_key;
}
if (type == DPAA2_ETH_RX_DIST_HASH) {
if (dpaa2_eth_has_legacy_dist(priv))
err = dpaa2_eth_config_legacy_hash_key(priv, key_iova);
else
err = dpaa2_eth_config_hash_key(priv, key_iova);
} else {
err = dpaa2_eth_config_cls_key(priv, key_iova);
}
dma_unmap_single(dev, key_iova, DPAA2_CLASSIFIER_DMA_SIZE,
DMA_TO_DEVICE);
if (!err && type == DPAA2_ETH_RX_DIST_HASH)
priv->rx_hash_fields = rx_hash_fields;
free_key:
kfree(dma_mem);
return err;
}
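/* Translate RXH_* flags into driver distribution field ids and program
 * the Rx hash key accordingly
 */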
int dpaa2_eth_set_hash(struct net_device *net_dev, u64 flags)
{
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
u64 key = 0;
int i;
if (!dpaa2_eth_hash_enabled(priv))
return -EOPNOTSUPP;
for (i = 0; i < ARRAY_SIZE(dist_fields); i++)
if (dist_fields[i].rxnfc_field & flags)
key |= dist_fields[i].id;
return dpaa2_eth_set_dist_key(net_dev, DPAA2_ETH_RX_DIST_HASH, key);
}
int dpaa2_eth_set_cls(struct net_device *net_dev, u64 flags)
{
return dpaa2_eth_set_dist_key(net_dev, DPAA2_ETH_RX_DIST_CLS, flags);
}
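/* Install the default Rx classification key, covering all supported header
 * fields, if the hardware and firmware configuration allows it
 */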
static int dpaa2_eth_set_default_cls(struct dpaa2_eth_priv *priv)
{
struct device *dev = priv->net_dev->dev.parent;
int err;
/* Check if we actually support Rx flow classification */
if (dpaa2_eth_has_legacy_dist(priv)) {
dev_dbg(dev, "Rx cls not supported by current MC version\n");
return -EOPNOTSUPP;
}
if (!dpaa2_eth_fs_enabled(priv)) {
dev_dbg(dev, "Rx cls disabled in DPNI options\n");
return -EOPNOTSUPP;
}
if (!dpaa2_eth_hash_enabled(priv)) {
dev_dbg(dev, "Rx cls disabled for single queue DPNIs\n");
return -EOPNOTSUPP;
}
/* If there is no support for masking in the classification table,
* we don't set a default key, as it will depend on the rules
* added by the user at runtime.
*/
if (!dpaa2_eth_fs_mask_enabled(priv))
goto out;
err = dpaa2_eth_set_cls(priv->net_dev, DPAA2_ETH_DIST_ALL);
if (err)
return err;
out:
priv->rx_cls_enabled = 1;
return 0;
}
/* Bind the DPNI to its needed objects and resources: buffer pool, DPIOs,
* frame queues and channels
*/
static int dpaa2_eth_bind_dpni(struct dpaa2_eth_priv *priv)
{
struct dpaa2_eth_bp *bp = priv->bp[DPAA2_ETH_DEFAULT_BP_IDX];
struct net_device *net_dev = priv->net_dev;
struct dpni_pools_cfg pools_params = { 0 };
struct device *dev = net_dev->dev.parent;
struct dpni_error_cfg err_cfg;
int err = 0;
int i;
pools_params.num_dpbp = 1;
pools_params.pools[0].dpbp_id = bp->dev->obj_desc.id;
pools_params.pools[0].backup_pool = 0;
pools_params.pools[0].buffer_size = priv->rx_buf_size;
err = dpni_set_pools(priv->mc_io, 0, priv->mc_token, &pools_params);
if (err) {
dev_err(dev, "dpni_set_pools() failed\n");
return err;
}
/* have the interface implicitly distribute traffic based on
* the default hash key
*/
err = dpaa2_eth_set_hash(net_dev, DPAA2_RXH_DEFAULT);
if (err && err != -EOPNOTSUPP)
dev_err(dev, "Failed to configure hashing\n");
/* Configure the flow classification key; it includes all
* supported header fields and cannot be modified at runtime
*/
err = dpaa2_eth_set_default_cls(priv);
if (err && err != -EOPNOTSUPP)
dev_err(dev, "Failed to configure Rx classification key\n");
/* Configure handling of error frames */
err_cfg.errors = DPAA2_FAS_RX_ERR_MASK;
err_cfg.set_frame_annotation = 1;
err_cfg.error_action = DPNI_ERROR_ACTION_DISCARD;
err = dpni_set_errors_behavior(priv->mc_io, 0, priv->mc_token,
&err_cfg);
if (err) {
dev_err(dev, "dpni_set_errors_behavior failed\n");
return err;
}
/* Configure Rx and Tx conf queues to generate CDANs */
for (i = 0; i < priv->num_fqs; i++) {
switch (priv->fq[i].type) {
case DPAA2_RX_FQ:
err = dpaa2_eth_setup_rx_flow(priv, &priv->fq[i]);
break;
case DPAA2_TX_CONF_FQ:
err = dpaa2_eth_setup_tx_flow(priv, &priv->fq[i]);
break;
case DPAA2_RX_ERR_FQ:
err = setup_rx_err_flow(priv, &priv->fq[i]);
break;
default:
dev_err(dev, "Invalid FQ type %d\n", priv->fq[i].type);
return -EINVAL;
}
if (err)
return err;
}
err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_TX, &priv->tx_qdid);
if (err) {
dev_err(dev, "dpni_get_qdid() failed\n");
return err;
}
return 0;
}
/* Allocate rings for storing incoming frame descriptors */
static int dpaa2_eth_alloc_rings(struct dpaa2_eth_priv *priv)
{
struct net_device *net_dev = priv->net_dev;
struct device *dev = net_dev->dev.parent;
int i;
for (i = 0; i < priv->num_channels; i++) {
priv->channel[i]->store =
dpaa2_io_store_create(DPAA2_ETH_STORE_SIZE, dev);
if (!priv->channel[i]->store) {
netdev_err(net_dev, "dpaa2_io_store_create() failed\n");
goto err_ring;
}
}
return 0;
err_ring:
for (i = 0; i < priv->num_channels; i++) {
if (!priv->channel[i]->store)
break;
dpaa2_io_store_destroy(priv->channel[i]->store);
}
return -ENOMEM;
}
static void dpaa2_eth_free_rings(struct dpaa2_eth_priv *priv)
{
int i;
for (i = 0; i < priv->num_channels; i++)
dpaa2_io_store_destroy(priv->channel[i]->store);
}
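/* Choose the interface MAC address: prefer the address configured by the
 * bootloader (port MAC), then the one already programmed in the DPNI, and
 * fall back to a random address if both are zero
 */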
static int dpaa2_eth_set_mac_addr(struct dpaa2_eth_priv *priv)
{
struct net_device *net_dev = priv->net_dev;
struct device *dev = net_dev->dev.parent;
u8 mac_addr[ETH_ALEN], dpni_mac_addr[ETH_ALEN];
int err;
/* Get firmware address, if any */
err = dpni_get_port_mac_addr(priv->mc_io, 0, priv->mc_token, mac_addr);
if (err) {
dev_err(dev, "dpni_get_port_mac_addr() failed\n");
return err;
}
/* Get DPNI attributes address, if any */
err = dpni_get_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
dpni_mac_addr);
if (err) {
dev_err(dev, "dpni_get_primary_mac_addr() failed\n");
return err;
}
/* First check if firmware has any address configured by bootloader */
if (!is_zero_ether_addr(mac_addr)) {
/* If the DPMAC addr != DPNI addr, update it */
if (!ether_addr_equal(mac_addr, dpni_mac_addr)) {
err = dpni_set_primary_mac_addr(priv->mc_io, 0,
priv->mc_token,
mac_addr);
if (err) {
dev_err(dev, "dpni_set_primary_mac_addr() failed\n");
return err;
}
}
eth_hw_addr_set(net_dev, mac_addr);
} else if (is_zero_ether_addr(dpni_mac_addr)) {
/* No MAC address configured, fill in net_dev->dev_addr
* with a random one
*/
eth_hw_addr_random(net_dev);
dev_dbg_once(dev, "device(s) have all-zero hwaddr, replaced with random\n");
err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
net_dev->dev_addr);
if (err) {
dev_err(dev, "dpni_set_primary_mac_addr() failed\n");
return err;
}
/* Override NET_ADDR_RANDOM set by eth_hw_addr_random(); for all
* practical purposes, this will be our "permanent" mac address,
* at least until the next reboot. This move will also permit
* register_netdevice() to properly fill up net_dev->perm_addr.
*/
net_dev->addr_assign_type = NET_ADDR_PERM;
} else {
/* NET_ADDR_PERM is default, all we have to do is
* fill in the device addr.
*/
eth_hw_addr_set(net_dev, dpni_mac_addr);
}
return 0;
}
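/* Net device setup: ndo/ethtool ops, MAC address, MTU limits, queue counts
 * and feature/XDP capability flags
 */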
static int dpaa2_eth_netdev_init(struct net_device *net_dev)
{
struct device *dev = net_dev->dev.parent;
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
u32 options = priv->dpni_attrs.options;
u64 supported = 0, not_supported = 0;
u8 bcast_addr[ETH_ALEN];
u8 num_queues;
int err;
net_dev->netdev_ops = &dpaa2_eth_ops;
net_dev->ethtool_ops = &dpaa2_ethtool_ops;
err = dpaa2_eth_set_mac_addr(priv);
if (err)
return err;
/* Explicitly add the broadcast address to the MAC filtering table */
eth_broadcast_addr(bcast_addr);
err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token, bcast_addr);
if (err) {
dev_err(dev, "dpni_add_mac_addr() failed\n");
return err;
}
/* Set MTU upper limit; lower limit is 68B (default value) */
net_dev->max_mtu = DPAA2_ETH_MAX_MTU;
err = dpni_set_max_frame_length(priv->mc_io, 0, priv->mc_token,
DPAA2_ETH_MFL);
if (err) {
dev_err(dev, "dpni_set_max_frame_length() failed\n");
return err;
}
/* Set actual number of queues in the net device */
num_queues = dpaa2_eth_queue_count(priv);
err = netif_set_real_num_tx_queues(net_dev, num_queues);
if (err) {
dev_err(dev, "netif_set_real_num_tx_queues() failed\n");
return err;
}
err = netif_set_real_num_rx_queues(net_dev, num_queues);
if (err) {
dev_err(dev, "netif_set_real_num_rx_queues() failed\n");
return err;
}
dpaa2_eth_detect_features(priv);
/* Capabilities listing */
supported |= IFF_LIVE_ADDR_CHANGE;
if (options & DPNI_OPT_NO_MAC_FILTER)
not_supported |= IFF_UNICAST_FLT;
else
supported |= IFF_UNICAST_FLT;
net_dev->priv_flags |= supported;
net_dev->priv_flags &= ~not_supported;
/* Features */
net_dev->features = NETIF_F_RXCSUM |
NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
NETIF_F_SG | NETIF_F_HIGHDMA |
NETIF_F_LLTX | NETIF_F_HW_TC | NETIF_F_TSO;
net_dev->gso_max_segs = DPAA2_ETH_ENQUEUE_MAX_FDS;
net_dev->hw_features = net_dev->features;
net_dev->xdp_features = NETDEV_XDP_ACT_BASIC |
NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_NDO_XMIT;
if (priv->dpni_attrs.wriop_version >= DPAA2_WRIOP_VERSION(3, 0, 0) &&
priv->dpni_attrs.num_queues <= 8)
net_dev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
if (priv->dpni_attrs.vlan_filter_entries)
net_dev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;
return 0;
}
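/* Kthread body: periodically refresh the link state until asked to stop */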
static int dpaa2_eth_poll_link_state(void *arg)
{
struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)arg;
int err;
while (!kthread_should_stop()) {
err = dpaa2_eth_link_state_update(priv);
if (unlikely(err))
return err;
msleep(DPAA2_ETH_LINK_STATE_REFRESH);
}
return 0;
}
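/* Find the DPMAC connected to this DPNI (if any), open it and, for
 * PHY-backed MACs, connect phylink before publishing priv->mac under
 * the mac_lock mutex
 */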
static int dpaa2_eth_connect_mac(struct dpaa2_eth_priv *priv)
{
struct fsl_mc_device *dpni_dev, *dpmac_dev;
struct dpaa2_mac *mac;
int err;
dpni_dev = to_fsl_mc_device(priv->net_dev->dev.parent);
dpmac_dev = fsl_mc_get_endpoint(dpni_dev, 0);
if (PTR_ERR(dpmac_dev) == -EPROBE_DEFER) {
netdev_dbg(priv->net_dev, "waiting for mac\n");
return PTR_ERR(dpmac_dev);
}
if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)
return 0;
mac = kzalloc(sizeof(struct dpaa2_mac), GFP_KERNEL);
if (!mac)
return -ENOMEM;
mac->mc_dev = dpmac_dev;
mac->mc_io = priv->mc_io;
mac->net_dev = priv->net_dev;
err = dpaa2_mac_open(mac);
if (err)
goto err_free_mac;
if (dpaa2_mac_is_type_phy(mac)) {
err = dpaa2_mac_connect(mac);
if (err) {
if (err == -EPROBE_DEFER)
netdev_dbg(priv->net_dev,
"could not connect to MAC\n");
else
netdev_err(priv->net_dev,
"Error connecting to the MAC endpoint: %pe",
ERR_PTR(err));
goto err_close_mac;
}
}
mutex_lock(&priv->mac_lock);
priv->mac = mac;
mutex_unlock(&priv->mac_lock);
return 0;
err_close_mac:
dpaa2_mac_close(mac);
err_free_mac:
kfree(mac);
return err;
}
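
/* Clear priv->mac under mac_lock first, so that concurrent readers stop
 * seeing the DPMAC, then tear it down (phylink disconnect, close, free)
 * outside the lock.
 */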
static void dpaa2_eth_disconnect_mac(struct dpaa2_eth_priv *priv)
{
struct dpaa2_mac *mac;
mutex_lock(&priv->mac_lock);
mac = priv->mac;
priv->mac = NULL;
mutex_unlock(&priv->mac_lock);
if (!mac)
return;
if (dpaa2_mac_is_type_phy(mac))
dpaa2_mac_disconnect(mac);
dpaa2_mac_close(mac);
kfree(mac);
}
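
/* Threaded handler for DPNI IRQ0: reacts to link state changes and to the
 * endpoint (DPMAC) being connected to or disconnected from the DPNI.
 */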
static irqreturn_t dpni_irq0_handler_thread(int irq_num, void *arg)
{
u32 status = ~0;
struct device *dev = (struct device *)arg;
struct fsl_mc_device *dpni_dev = to_fsl_mc_device(dev);
struct net_device *net_dev = dev_get_drvdata(dev);
struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
bool had_mac;
int err;
err = dpni_get_irq_status(dpni_dev->mc_io, 0, dpni_dev->mc_handle,
DPNI_IRQ_INDEX, &status);
if (unlikely(err)) {
netdev_err(net_dev, "Can't get irq status (err %d)\n", err);
return IRQ_HANDLED;
}
if (status & DPNI_IRQ_EVENT_LINK_CHANGED)
dpaa2_eth_link_state_update(netdev_priv(net_dev));
if (status & DPNI_IRQ_EVENT_ENDPOINT_CHANGED) {
dpaa2_eth_set_mac_addr(netdev_priv(net_dev));
dpaa2_eth_update_tx_fqids(priv);
/* We can avoid locking because the "endpoint changed" IRQ
* handler is the only one who changes priv->mac at runtime,
* so we are not racing with anyone.
*/
had_mac = !!priv->mac;
if (had_mac)
dpaa2_eth_disconnect_mac(priv);
else
dpaa2_eth_connect_mac(priv);
}
return IRQ_HANDLED;
}
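
/* Allocate the MC interrupts, install the threaded handler above and enable
 * the link-changed and endpoint-changed events.
 */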
static int dpaa2_eth_setup_irqs(struct fsl_mc_device *ls_dev)
{
int err = 0;
struct fsl_mc_device_irq *irq;
err = fsl_mc_allocate_irqs(ls_dev);
if (err) {
dev_err(&ls_dev->dev, "MC irqs allocation failed\n");
return err;
}
irq = ls_dev->irqs[0];
err = devm_request_threaded_irq(&ls_dev->dev, irq->virq,
NULL, dpni_irq0_handler_thread,
IRQF_NO_SUSPEND | IRQF_ONESHOT,
dev_name(&ls_dev->dev), &ls_dev->dev);
if (err < 0) {
dev_err(&ls_dev->dev, "devm_request_threaded_irq(): %d\n", err);
goto free_mc_irq;
}
err = dpni_set_irq_mask(ls_dev->mc_io, 0, ls_dev->mc_handle,
DPNI_IRQ_INDEX, DPNI_IRQ_EVENT_LINK_CHANGED |
DPNI_IRQ_EVENT_ENDPOINT_CHANGED);
if (err < 0) {
dev_err(&ls_dev->dev, "dpni_set_irq_mask(): %d\n", err);
goto free_irq;
}
err = dpni_set_irq_enable(ls_dev->mc_io, 0, ls_dev->mc_handle,
DPNI_IRQ_INDEX, 1);
if (err < 0) {
dev_err(&ls_dev->dev, "dpni_set_irq_enable(): %d\n", err);
goto free_irq;
}
return 0;
free_irq:
devm_free_irq(&ls_dev->dev, irq->virq, &ls_dev->dev);
free_mc_irq:
fsl_mc_free_irqs(ls_dev);
return err;
}
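
/* Register one NAPI instance per channel */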
static void dpaa2_eth_add_ch_napi(struct dpaa2_eth_priv *priv)
{
int i;
struct dpaa2_eth_channel *ch;
for (i = 0; i < priv->num_channels; i++) {
ch = priv->channel[i];
/* NAPI weight *MUST* be a multiple of DPAA2_ETH_STORE_SIZE */
netif_napi_add(priv->net_dev, &ch->napi, dpaa2_eth_poll);
}
}
static void dpaa2_eth_del_ch_napi(struct dpaa2_eth_priv *priv)
{
int i;
struct dpaa2_eth_channel *ch;
for (i = 0; i < priv->num_channels; i++) {
ch = priv->channel[i];
netif_napi_del(&ch->napi);
}
}
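
/* Probe a DPNI object: allocate the net_device and an MC portal, set up the
 * DPNI, DPIO and buffer pool objects, then bring up NAPI, per-CPU state,
 * checksum offload, the rings, the DPMAC connection, interrupts (with a link
 * state polling thread as fallback), devlink, and finally register the
 * net_device.
 */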
static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
{
struct device *dev;
struct net_device *net_dev = NULL;
struct dpaa2_eth_priv *priv = NULL;
int err = 0;
dev = &dpni_dev->dev;
/* Net device */
net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA2_ETH_MAX_NETDEV_QUEUES);
if (!net_dev) {
dev_err(dev, "alloc_etherdev_mq() failed\n");
return -ENOMEM;
}
SET_NETDEV_DEV(net_dev, dev);
dev_set_drvdata(dev, net_dev);
priv = netdev_priv(net_dev);
priv->net_dev = net_dev;
SET_NETDEV_DEVLINK_PORT(net_dev, &priv->devlink_port);
mutex_init(&priv->mac_lock);
priv->iommu_domain = iommu_get_domain_for_dev(dev);
priv->tx_tstamp_type = HWTSTAMP_TX_OFF;
priv->rx_tstamp = false;
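	/* Frames that request one-step PTP timestamping are queued on tx_skbs
	 * and sent from the tx_onestep_tstamp work item, serialized by
	 * onestep_tstamp_lock so that only one such frame is in flight at a
	 * time.
	 */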
priv->dpaa2_ptp_wq = alloc_workqueue("dpaa2_ptp_wq", 0, 0);
if (!priv->dpaa2_ptp_wq) {
err = -ENOMEM;
goto err_wq_alloc;
}
INIT_WORK(&priv->tx_onestep_tstamp, dpaa2_eth_tx_onestep_tstamp);
mutex_init(&priv->onestep_tstamp_lock);
skb_queue_head_init(&priv->tx_skbs);
priv->rx_copybreak = DPAA2_ETH_DEFAULT_COPYBREAK;
/* Obtain a MC portal */
err = fsl_mc_portal_allocate(dpni_dev, FSL_MC_IO_ATOMIC_CONTEXT_PORTAL,
&priv->mc_io);
if (err) {
if (err == -ENXIO) {
dev_dbg(dev, "waiting for MC portal\n");
err = -EPROBE_DEFER;
} else {
dev_err(dev, "MC portal allocation failed\n");
}
goto err_portal_alloc;
}
/* MC objects initialization and configuration */
err = dpaa2_eth_setup_dpni(dpni_dev);
if (err)
goto err_dpni_setup;
err = dpaa2_eth_setup_dpio(priv);
if (err)
goto err_dpio_setup;
dpaa2_eth_setup_fqs(priv);
err = dpaa2_eth_setup_default_dpbp(priv);
if (err)
goto err_dpbp_setup;
err = dpaa2_eth_bind_dpni(priv);
if (err)
goto err_bind;
/* Add a NAPI context for each channel */
dpaa2_eth_add_ch_napi(priv);
/* Percpu statistics */
priv->percpu_stats = alloc_percpu(*priv->percpu_stats);
if (!priv->percpu_stats) {
dev_err(dev, "alloc_percpu(percpu_stats) failed\n");
err = -ENOMEM;
goto err_alloc_percpu_stats;
}
priv->percpu_extras = alloc_percpu(*priv->percpu_extras);
if (!priv->percpu_extras) {
dev_err(dev, "alloc_percpu(percpu_extras) failed\n");
err = -ENOMEM;
goto err_alloc_percpu_extras;
}
priv->sgt_cache = alloc_percpu(*priv->sgt_cache);
if (!priv->sgt_cache) {
dev_err(dev, "alloc_percpu(sgt_cache) failed\n");
err = -ENOMEM;
goto err_alloc_sgt_cache;
}
priv->fd = alloc_percpu(*priv->fd);
if (!priv->fd) {
dev_err(dev, "alloc_percpu(fds) failed\n");
err = -ENOMEM;
goto err_alloc_fds;
}
err = dpaa2_eth_netdev_init(net_dev);
if (err)
goto err_netdev_init;
/* Configure checksum offload based on current interface flags */
err = dpaa2_eth_set_rx_csum(priv, !!(net_dev->features & NETIF_F_RXCSUM));
if (err)
goto err_csum;
err = dpaa2_eth_set_tx_csum(priv,
!!(net_dev->features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)));
if (err)
goto err_csum;
err = dpaa2_eth_alloc_rings(priv);
if (err)
goto err_alloc_rings;
#ifdef CONFIG_FSL_DPAA2_ETH_DCB
if (dpaa2_eth_has_pause_support(priv) && priv->vlan_cls_enabled) {
priv->dcbx_mode = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE;
net_dev->dcbnl_ops = &dpaa2_eth_dcbnl_ops;
} else {
dev_dbg(dev, "PFC not supported\n");
}
#endif
err = dpaa2_eth_connect_mac(priv);
if (err)
goto err_connect_mac;
err = dpaa2_eth_setup_irqs(dpni_dev);
if (err) {
netdev_warn(net_dev, "Failed to set link interrupt, fall back to polling\n");
priv->poll_thread = kthread_run(dpaa2_eth_poll_link_state, priv,
"%s_poll_link", net_dev->name);
if (IS_ERR(priv->poll_thread)) {
dev_err(dev, "Error starting polling thread\n");
goto err_poll_thread;
}
priv->do_link_poll = true;
}
err = dpaa2_eth_dl_alloc(priv);
if (err)
goto err_dl_register;
err = dpaa2_eth_dl_traps_register(priv);
if (err)
goto err_dl_trap_register;
err = dpaa2_eth_dl_port_add(priv);
if (err)
goto err_dl_port_add;
net_dev->needed_headroom = DPAA2_ETH_SWA_SIZE + DPAA2_ETH_TX_BUF_ALIGN;
err = register_netdev(net_dev);
if (err < 0) {
dev_err(dev, "register_netdev() failed\n");
goto err_netdev_reg;
}
#ifdef CONFIG_DEBUG_FS
dpaa2_dbg_add(priv);
#endif
dpaa2_eth_dl_register(priv);
dev_info(dev, "Probed interface %s\n", net_dev->name);
return 0;
err_netdev_reg:
dpaa2_eth_dl_port_del(priv);
err_dl_port_add:
dpaa2_eth_dl_traps_unregister(priv);
err_dl_trap_register:
dpaa2_eth_dl_free(priv);
err_dl_register:
if (priv->do_link_poll)
kthread_stop(priv->poll_thread);
else
fsl_mc_free_irqs(dpni_dev);
err_poll_thread:
dpaa2_eth_disconnect_mac(priv);
err_connect_mac:
dpaa2_eth_free_rings(priv);
err_alloc_rings:
err_csum:
err_netdev_init:
free_percpu(priv->fd);
err_alloc_fds:
free_percpu(priv->sgt_cache);
err_alloc_sgt_cache:
free_percpu(priv->percpu_extras);
err_alloc_percpu_extras:
free_percpu(priv->percpu_stats);
err_alloc_percpu_stats:
dpaa2_eth_del_ch_napi(priv);
err_bind:
dpaa2_eth_free_dpbps(priv);
err_dpbp_setup:
dpaa2_eth_free_dpio(priv);
err_dpio_setup:
dpaa2_eth_free_dpni(priv);
err_dpni_setup:
fsl_mc_portal_free(priv->mc_io);
err_portal_alloc:
destroy_workqueue(priv->dpaa2_ptp_wq);
err_wq_alloc:
dev_set_drvdata(dev, NULL);
free_netdev(net_dev);
return err;
}
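
/* Tear down everything set up in dpaa2_eth_probe() */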
static void dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
{
struct device *dev;
struct net_device *net_dev;
struct dpaa2_eth_priv *priv;
dev = &ls_dev->dev;
net_dev = dev_get_drvdata(dev);
priv = netdev_priv(net_dev);
dpaa2_eth_dl_unregister(priv);
#ifdef CONFIG_DEBUG_FS
dpaa2_dbg_remove(priv);
#endif
unregister_netdev(net_dev);
dpaa2_eth_dl_port_del(priv);
dpaa2_eth_dl_traps_unregister(priv);
dpaa2_eth_dl_free(priv);
if (priv->do_link_poll)
kthread_stop(priv->poll_thread);
else
fsl_mc_free_irqs(ls_dev);
dpaa2_eth_disconnect_mac(priv);
dpaa2_eth_free_rings(priv);
free_percpu(priv->fd);
free_percpu(priv->sgt_cache);
free_percpu(priv->percpu_stats);
free_percpu(priv->percpu_extras);
dpaa2_eth_del_ch_napi(priv);
dpaa2_eth_free_dpbps(priv);
dpaa2_eth_free_dpio(priv);
dpaa2_eth_free_dpni(priv);
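	/* Unmap the 1588 SINGLE_STEP register, if the MC exposed its base
	 * address for direct one-step timestamp updates.
	 */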
if (priv->onestep_reg_base)
iounmap(priv->onestep_reg_base);
fsl_mc_portal_free(priv->mc_io);
destroy_workqueue(priv->dpaa2_ptp_wq);
dev_dbg(net_dev->dev.parent, "Removed interface %s\n", net_dev->name);
free_netdev(net_dev);
}
static const struct fsl_mc_device_id dpaa2_eth_match_id_table[] = {
{
.vendor = FSL_MC_VENDOR_FREESCALE,
.obj_type = "dpni",
},
{ .vendor = 0x0 }
};
MODULE_DEVICE_TABLE(fslmc, dpaa2_eth_match_id_table);
static struct fsl_mc_driver dpaa2_eth_driver = {
.driver = {
.name = KBUILD_MODNAME,
},
.probe = dpaa2_eth_probe,
.remove = dpaa2_eth_remove,
.match_id_table = dpaa2_eth_match_id_table
};
static int __init dpaa2_eth_driver_init(void)
{
int err;
dpaa2_eth_dbg_init();
err = fsl_mc_driver_register(&dpaa2_eth_driver);
if (err) {
dpaa2_eth_dbg_exit();
return err;
}
return 0;
}
static void __exit dpaa2_eth_driver_exit(void)
{
dpaa2_eth_dbg_exit();
fsl_mc_driver_unregister(&dpaa2_eth_driver);
}
module_init(dpaa2_eth_driver_init);
module_exit(dpaa2_eth_driver_exit);