// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018, Intel Corporation. */

#include "ice.h"
#include "ice_vf_lib_private.h"
#include "ice_base.h"
#include "ice_lib.h"
#include "ice_fltr.h"
#include "ice_dcb_lib.h"
#include "ice_flow.h"
#include "ice_eswitch.h"
#include "ice_virtchnl_allowlist.h"
#include "ice_flex_pipe.h"
#include "ice_vf_vsi_vlan_ops.h"
#include "ice_vlan.h"

/**
 * ice_free_vf_entries - Free all VF entries from the hash table
 * @pf: pointer to the PF structure
 *
 * Iterate over the VF hash table, removing and releasing all VF entries.
 * Called during VF teardown or as cleanup during failed VF initialization.
 */
static void ice_free_vf_entries(struct ice_pf *pf)
{
	struct ice_vfs *vfs = &pf->vfs;
	struct hlist_node *tmp;
	struct ice_vf *vf;
	unsigned int bkt;

	/* Remove all VFs from the hash table and release their main
	 * reference. Once all references to the VF are dropped, ice_put_vf()
	 * will call ice_release_vf which will remove the VF memory.
	 */
	lockdep_assert_held(&vfs->table_lock);

	hash_for_each_safe(vfs->table, bkt, tmp, vf, entry) {
		hash_del_rcu(&vf->entry);
		ice_put_vf(vf);
	}
}

/**
 * ice_free_vf_res - Free a VF's resources
 * @vf: pointer to the VF info
 */
static void ice_free_vf_res(struct ice_vf *vf)
{
	struct ice_pf *pf = vf->pf;
	int i, last_vector_idx;

	/* First, disable VF's configuration API to prevent OS from
	 * accessing the VF's VSI after it's freed or invalidated.
	 */
	clear_bit(ICE_VF_STATE_INIT, vf->vf_states);
	ice_vf_fdir_exit(vf);
	/* free VF control VSI */
	if (vf->ctrl_vsi_idx != ICE_NO_VSI)
		ice_vf_ctrl_vsi_release(vf);

	/* free VSI and disconnect it from the parent uplink */
	if (vf->lan_vsi_idx != ICE_NO_VSI) {
		ice_vf_vsi_release(vf);
		vf->num_mac = 0;
	}

	last_vector_idx = vf->first_vector_idx + vf->num_msix - 1;

	/* clear VF MDD event information */
	memset(&vf->mdd_tx_events, 0, sizeof(vf->mdd_tx_events));
	memset(&vf->mdd_rx_events, 0, sizeof(vf->mdd_rx_events));

	/* Disable interrupts so that VF starts in a known state */
	for (i = vf->first_vector_idx; i <= last_vector_idx; i++) {
		wr32(&pf->hw, GLINT_DYN_CTL(i), GLINT_DYN_CTL_CLEARPBA_M);
		ice_flush(&pf->hw);
	}
	/* reset some of the state variables keeping track of the resources */
	clear_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states);
	clear_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states);
}

/**
 * ice_dis_vf_mappings
 * @vf: pointer to the VF structure
 */
static void ice_dis_vf_mappings(struct ice_vf *vf)
{
	struct ice_pf *pf = vf->pf;
	struct ice_vsi *vsi;
	struct device *dev;
	int first, last, v;
	struct ice_hw *hw;

	hw = &pf->hw;
	vsi = ice_get_vf_vsi(vf);
	if (WARN_ON(!vsi))
		return;

	dev = ice_pf_to_dev(pf);
	wr32(hw, VPINT_ALLOC(vf->vf_id), 0);
	wr32(hw, VPINT_ALLOC_PCI(vf->vf_id), 0);

	first = vf->first_vector_idx;
	last = first + vf->num_msix - 1;
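	/* Hand the VF's MSI-X vectors back to the PF: programming
	 * GLINT_VECT2FUNC with IS_PF set re-associates each vector with this
	 * PF function.
	 */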
	for (v = first; v <= last; v++) {
		u32 reg;

		reg = FIELD_PREP(GLINT_VECT2FUNC_IS_PF_M, 1) |
		      FIELD_PREP(GLINT_VECT2FUNC_PF_NUM_M, hw->pf_id);
		wr32(hw, GLINT_VECT2FUNC(v), reg);
	}

	if (vsi->tx_mapping_mode == ICE_VSI_MAP_CONTIG)
		wr32(hw, VPLAN_TX_QBASE(vf->vf_id), 0);
	else
		dev_err(dev, "Scattered mode for VF Tx queues is not yet implemented\n");

	if (vsi->rx_mapping_mode == ICE_VSI_MAP_CONTIG)
		wr32(hw, VPLAN_RX_QBASE(vf->vf_id), 0);
	else
		dev_err(dev, "Scattered mode for VF Rx queues is not yet implemented\n");
}

/**
 * ice_sriov_free_msix_res - Reset/free any used MSIX resources
 * @pf: pointer to the PF structure
 *
 * Since no MSIX entries are taken from the pf->irq_tracker, just clear
 * the pf->sriov_base_vector.
 *
 * Returns 0 on success, and -EINVAL on error.
 */
static int ice_sriov_free_msix_res(struct ice_pf *pf)
{
	if (!pf)
		return -EINVAL;

	bitmap_free(pf->sriov_irq_bm);
	pf->sriov_irq_size = 0;
	pf->sriov_base_vector = 0;

	return 0;
}

/**
 * ice_free_vfs - Free all VFs
 * @pf: pointer to the PF structure
 */
void ice_free_vfs(struct ice_pf *pf)
{
	struct device *dev = ice_pf_to_dev(pf);
	struct ice_vfs *vfs = &pf->vfs;
	struct ice_hw *hw = &pf->hw;
	struct ice_vf *vf;
	unsigned int bkt;

	if (!ice_has_vfs(pf))
		return;
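
	/* Claim ICE_VF_DIS; if another flow is already disabling VFs, poll
	 * until it finishes so the teardown paths below do not race.
	 */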
	while (test_and_set_bit(ICE_VF_DIS, pf->state))
		usleep_range(1000, 2000);

	/* Disable IOV before freeing resources. This lets any VF drivers
	 * running in the host get themselves cleaned up before we yank
	 * the carpet out from underneath their feet.
	 */
	if (!pci_vfs_assigned(pf->pdev))
		pci_disable_sriov(pf->pdev);
	else
		dev_warn(dev, "VFs are assigned - not disabling SR-IOV\n");

	mutex_lock(&vfs->table_lock);

	ice_for_each_vf(pf, bkt, vf) {
		mutex_lock(&vf->cfg_lock);

		ice_eswitch_detach(pf, vf);
		ice_dis_vf_qs(vf);

		if (test_bit(ICE_VF_STATE_INIT, vf->vf_states)) {
			/* disable VF qp mappings and set VF disable state */
			ice_dis_vf_mappings(vf);
			set_bit(ICE_VF_STATE_DIS, vf->vf_states);
			ice_free_vf_res(vf);
		}
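
		/* When the VFs are not assigned to a VM, also clear this VF's
		 * pending VFLR status bit. GLGEN_VFLRSTAT packs 32 VFs per
		 * register, indexed by the device-absolute VF ID.
		 */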
		if (!pci_vfs_assigned(pf->pdev)) {
			u32 reg_idx, bit_idx;

			reg_idx = (hw->func_caps.vf_base_id + vf->vf_id) / 32;
			bit_idx = (hw->func_caps.vf_base_id + vf->vf_id) % 32;
			wr32(hw, GLGEN_VFLRSTAT(reg_idx), BIT(bit_idx));
		}

		/* clear malicious info since the VF is getting released */
		list_del(&vf->mbx_info.list_entry);

		mutex_unlock(&vf->cfg_lock);
	}

	if (ice_sriov_free_msix_res(pf))
		dev_err(dev, "Failed to free MSIX resources used by SR-IOV\n");

	vfs->num_qps_per = 0;
	ice_free_vf_entries(pf);

	mutex_unlock(&vfs->table_lock);

	clear_bit(ICE_VF_DIS, pf->state);
	clear_bit(ICE_FLAG_SRIOV_ENA, pf->flags);
}

/**
 * ice_vf_vsi_setup - Set up a VF VSI
 * @vf: VF to setup VSI for
 *
 * Returns pointer to the successfully allocated VSI struct on success,
 * otherwise returns NULL on failure.
 */
static struct ice_vsi *ice_vf_vsi_setup(struct ice_vf *vf)
{
	struct ice_vsi_cfg_params params = {};
	struct ice_pf *pf = vf->pf;
	struct ice_vsi *vsi;
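
	/* Describe the VSI to create: a VF-type VSI owned by this VF, on the
	 * VF's port, initialized from scratch (ICE_VSI_FLAG_INIT).
	 */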
	params.type = ICE_VSI_VF;
	params.port_info = ice_vf_get_port_info(vf);
	params.vf = vf;
	params.flags = ICE_VSI_FLAG_INIT;

	vsi = ice_vsi_setup(pf, &params);

	if (!vsi) {
		dev_err(ice_pf_to_dev(pf), "Failed to create VF VSI\n");
		ice_vf_invalidate_vsi(vf);
		return NULL;
	}

	vf->lan_vsi_idx = vsi->idx;

	return vsi;
}
/**
|
2020-05-15 17:51:07 -07:00
|
|
|
* ice_ena_vf_msix_mappings - enable VF MSIX mappings in hardware
|
|
|
|
* @vf: VF to enable MSIX mappings for
|
2018-09-19 17:42:55 -07:00
|
|
|
*
|
2020-05-15 17:51:07 -07:00
|
|
|
* Some of the registers need to be indexed/configured using hardware global
|
|
|
|
 * device values, while other registers need 0-based values that are relative
|
|
|
|
 * to the PF's vector space.
|
2018-09-19 17:42:55 -07:00
|
|
|
*/
|
2020-05-15 17:51:07 -07:00
|
|
|
static void ice_ena_vf_msix_mappings(struct ice_vf *vf)
|
2018-09-19 17:42:55 -07:00
|
|
|
{
|
2020-05-15 17:51:07 -07:00
|
|
|
int device_based_first_msix, device_based_last_msix;
|
|
|
|
int pf_based_first_msix, pf_based_last_msix, v;
|
2018-09-19 17:42:55 -07:00
|
|
|
struct ice_pf *pf = vf->pf;
|
2020-05-15 17:51:07 -07:00
|
|
|
int device_based_vf_id;
|
2018-09-19 17:42:55 -07:00
|
|
|
struct ice_hw *hw;
|
|
|
|
u32 reg;
|
|
|
|
|
|
|
|
hw = &pf->hw;
|
2020-05-15 17:51:07 -07:00
|
|
|
pf_based_first_msix = vf->first_vector_idx;
|
2023-10-19 10:32:20 -07:00
|
|
|
pf_based_last_msix = (pf_based_first_msix + vf->num_msix) - 1;
|
2020-05-15 17:51:07 -07:00
|
|
|
|
|
|
|
device_based_first_msix = pf_based_first_msix +
|
|
|
|
pf->hw.func_caps.common_cap.msix_vector_first_id;
|
|
|
|
device_based_last_msix =
|
2023-10-19 10:32:20 -07:00
|
|
|
(device_based_first_msix + vf->num_msix) - 1;
|
2020-05-15 17:51:07 -07:00
|
|
|
device_based_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
|
|
|
|
|
2023-12-05 17:01:05 -08:00
|
|
|
reg = FIELD_PREP(VPINT_ALLOC_FIRST_M, device_based_first_msix) |
|
|
|
|
FIELD_PREP(VPINT_ALLOC_LAST_M, device_based_last_msix) |
|
|
|
|
VPINT_ALLOC_VALID_M;
|
2018-09-19 17:42:55 -07:00
|
|
|
wr32(hw, VPINT_ALLOC(vf->vf_id), reg);
|
|
|
|
|
2023-12-05 17:01:05 -08:00
|
|
|
reg = FIELD_PREP(VPINT_ALLOC_PCI_FIRST_M, device_based_first_msix) |
|
|
|
|
FIELD_PREP(VPINT_ALLOC_PCI_LAST_M, device_based_last_msix) |
|
|
|
|
VPINT_ALLOC_PCI_VALID_M;
|
2018-10-18 08:37:08 -07:00
|
|
|
wr32(hw, VPINT_ALLOC_PCI(vf->vf_id), reg);
|
2020-05-15 17:51:07 -07:00
|
|
|
|
2018-09-19 17:42:55 -07:00
|
|
|
	/* map each interrupt vector to its function */
|
2020-05-15 17:51:07 -07:00
|
|
|
for (v = pf_based_first_msix; v <= pf_based_last_msix; v++) {
|
2023-12-05 17:01:05 -08:00
|
|
|
reg = FIELD_PREP(GLINT_VECT2FUNC_VF_NUM_M, device_based_vf_id) |
|
|
|
|
FIELD_PREP(GLINT_VECT2FUNC_PF_NUM_M, hw->pf_id);
|
2018-09-19 17:42:55 -07:00
|
|
|
wr32(hw, GLINT_VECT2FUNC(v), reg);
|
|
|
|
}
|
|
|
|
|
2020-05-15 17:51:07 -07:00
|
|
|
/* Map mailbox interrupt to VF MSI-X vector 0 */
|
|
|
|
wr32(hw, VPINT_MBX_CTL(device_based_vf_id), VPINT_MBX_CTL_CAUSE_ENA_M);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* ice_ena_vf_q_mappings - enable Rx/Tx queue mappings for a VF
|
|
|
|
* @vf: VF to enable the mappings for
|
|
|
|
* @max_txq: max Tx queues allowed on the VF's VSI
|
|
|
|
* @max_rxq: max Rx queues allowed on the VF's VSI
|
|
|
|
*/
|
|
|
|
static void ice_ena_vf_q_mappings(struct ice_vf *vf, u16 max_txq, u16 max_rxq)
|
|
|
|
{
|
|
|
|
struct device *dev = ice_pf_to_dev(vf->pf);
|
2021-03-02 10:15:39 -08:00
|
|
|
struct ice_vsi *vsi = ice_get_vf_vsi(vf);
|
2020-05-15 17:51:07 -07:00
|
|
|
struct ice_hw *hw = &vf->pf->hw;
|
|
|
|
u32 reg;
|
|
|
|
|
2022-04-11 16:29:03 -07:00
|
|
|
if (WARN_ON(!vsi))
|
|
|
|
return;
|
|
|
|
|
2018-10-18 08:37:08 -07:00
|
|
|
/* set regardless of mapping mode */
|
|
|
|
wr32(hw, VPLAN_TXQ_MAPENA(vf->vf_id), VPLAN_TXQ_MAPENA_TX_ENA_M);
|
|
|
|
|
2018-09-19 17:42:55 -07:00
|
|
|
/* VF Tx queues allocation */
|
|
|
|
if (vsi->tx_mapping_mode == ICE_VSI_MAP_CONTIG) {
|
|
|
|
/* set the VF PF Tx queue range
|
|
|
|
* VFNUMQ value should be set to (number of queues - 1). A value
|
|
|
|
* of 0 means 1 queue and a value of 255 means 256 queues
|
|
|
|
*/
|
2023-12-05 17:01:05 -08:00
|
|
|
reg = FIELD_PREP(VPLAN_TX_QBASE_VFFIRSTQ_M, vsi->txq_map[0]) |
|
|
|
|
FIELD_PREP(VPLAN_TX_QBASE_VFNUMQ_M, max_txq - 1);
|
2018-09-19 17:42:55 -07:00
|
|
|
wr32(hw, VPLAN_TX_QBASE(vf->vf_id), reg);
|
|
|
|
} else {
|
2019-11-08 06:23:26 -08:00
|
|
|
dev_err(dev, "Scattered mode for VF Tx queues is not yet implemented\n");
|
2018-09-19 17:42:55 -07:00
|
|
|
}
|
|
|
|
|
2018-10-18 08:37:08 -07:00
|
|
|
/* set regardless of mapping mode */
|
|
|
|
wr32(hw, VPLAN_RXQ_MAPENA(vf->vf_id), VPLAN_RXQ_MAPENA_RX_ENA_M);
|
|
|
|
|
2018-09-19 17:42:55 -07:00
|
|
|
/* VF Rx queues allocation */
|
|
|
|
if (vsi->rx_mapping_mode == ICE_VSI_MAP_CONTIG) {
|
|
|
|
/* set the VF PF Rx queue range
|
|
|
|
* VFNUMQ value should be set to (number of queues - 1). A value
|
|
|
|
* of 0 means 1 queue and a value of 255 means 256 queues
|
|
|
|
*/
|
2023-12-05 17:01:05 -08:00
|
|
|
reg = FIELD_PREP(VPLAN_RX_QBASE_VFFIRSTQ_M, vsi->rxq_map[0]) |
|
|
|
|
FIELD_PREP(VPLAN_RX_QBASE_VFNUMQ_M, max_rxq - 1);
|
2018-09-19 17:42:55 -07:00
|
|
|
wr32(hw, VPLAN_RX_QBASE(vf->vf_id), reg);
|
|
|
|
} else {
|
2019-11-08 06:23:26 -08:00
|
|
|
dev_err(dev, "Scattered mode for VF Rx queues is not yet implemented\n");
|
2018-09-19 17:42:55 -07:00
|
|
|
}
|
|
|
|
}
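As a side note on the VFNUMQ encoding used in both branches above, the field stores (number of queues - 1), so 0 means one queue and 255 means 256. The tiny helper below is a hedged sketch of that encoding; its name and the clamp to 256 are assumptions made only to illustrate the comment.

/* Sketch: encode a queue count into a VFNUMQ-style field that holds
 * (number of queues - 1): 0 means 1 queue, 255 means 256 queues.
 */
static u16 ice_sketch_encode_vfnumq(u16 num_queues)
{
	if (!num_queues)
		return 0;		/* at least one queue is implied */
	if (num_queues > 256)
		num_queues = 256;	/* field cannot express more */
	return num_queues - 1;
}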
|
|
|
|
|
2020-05-15 17:51:07 -07:00
|
|
|
/**
|
|
|
|
* ice_ena_vf_mappings - enable VF MSIX and queue mapping
|
|
|
|
* @vf: pointer to the VF structure
|
|
|
|
*/
|
|
|
|
static void ice_ena_vf_mappings(struct ice_vf *vf)
|
|
|
|
{
|
2021-03-02 10:15:39 -08:00
|
|
|
struct ice_vsi *vsi = ice_get_vf_vsi(vf);
|
2020-05-15 17:51:07 -07:00
|
|
|
|
2022-04-11 16:29:03 -07:00
|
|
|
if (WARN_ON(!vsi))
|
|
|
|
return;
|
|
|
|
|
2020-05-15 17:51:07 -07:00
|
|
|
ice_ena_vf_msix_mappings(vf);
|
|
|
|
ice_ena_vf_q_mappings(vf, vsi->alloc_txq, vsi->alloc_rxq);
|
|
|
|
}
|
|
|
|
|
2019-04-16 10:30:44 -07:00
|
|
|
/**
|
|
|
|
* ice_calc_vf_reg_idx - Calculate the VF's register index in the PF space
|
|
|
|
* @vf: VF to calculate the register index for
|
|
|
|
* @q_vector: a q_vector associated to the VF
|
|
|
|
*/
|
ice: store VF relative MSI-X index in q_vector->vf_reg_idx
The ice physical function driver needs to configure the association of
queues and interrupts on behalf of its virtual functions. This is done over
virtchnl by the VF sending messages during its initialization phase. These
messages contain a vector_id which the VF wants to associate with a given
queue. This ID is relative to the VF space, where 0 indicates the control
IRQ for non-queue interrupts.
When programming the mapping, the PF driver currently passes this vector_id
directly to the low-level programming functions. This works for SR-IOV,
because the hardware uses the VF-based indexing for interrupts.
This won't work for Scalable IOV, which uses PF-based indexing for
programming its VSIs. To handle this, the driver needs to be able to look
up the proper index to use for programming. For typical IRQs, this would be
the q_vector->reg_idx field.
The q_vector->reg_idx can't be set to a VF relative value, because it is
used when the PF needs to control the interrupt, such as when triggering a
software interrupt on stopping the Tx queue. Thus, introduce a new
q_vector->vf_reg_idx which can store the VF relative index for registers
which expect this.
Use this in ice_cfg_interrupt to look up the VF index from the q_vector.
This allows removing the vector ID parameter of ice_cfg_interrupt. Also
notice that this function returns an int whose value is then cast to the virtchnl
error enumeration, virtchnl_status_code. Update the return type to indicate
it does not return an integer error code. We can't use normal error codes
here because the return values are passed across the virtchnl interface.
This will allow the future Scalable IOV VFs to correctly look up the index
needed for programming the VF queues without breaking SR-IOV.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-22 14:44:45 -07:00
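The practical effect of the two indices can be summarized by the hedged sketch below. The helper and its bool parameter are hypothetical; they only show which field feeds which class of register: vf_reg_idx for registers that use the VF's own vector numbering (the queue-to-vector mapping programmed on the VF's behalf), reg_idx for registers indexed in the PF's space such as GLINT_DYN_CTL.

/* Sketch: choose the vector index to program based on whether the target
 * register expects VF-relative or PF-relative numbering.
 */
static u16 ice_sketch_vector_idx(const struct ice_q_vector *q_vector,
				 bool vf_relative)
{
	return vf_relative ? q_vector->vf_reg_idx : q_vector->reg_idx;
}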
|
|
|
void ice_calc_vf_reg_idx(struct ice_vf *vf, struct ice_q_vector *q_vector)
|
2019-04-16 10:30:44 -07:00
|
|
|
{
|
|
|
|
if (!vf || !q_vector)
|
2024-03-22 14:44:45 -07:00
|
|
|
return;
|
2019-04-16 10:30:44 -07:00
|
|
|
|
|
|
|
/* always add one to account for the OICR being the first MSIX */
|
2024-03-22 14:44:45 -07:00
|
|
|
q_vector->vf_reg_idx = q_vector->v_idx + ICE_NONQ_VECS_VF;
|
|
|
|
q_vector->reg_idx = vf->first_vector_idx + q_vector->vf_reg_idx;
|
2019-04-16 10:30:44 -07:00
|
|
|
}
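A quick worked example of the two assignments above, using illustrative numbers and assuming ICE_NONQ_VECS_VF is 1 (one non-queue vector reserved for the VF's mailbox/OICR):

/* Illustrative numbers only: vf->first_vector_idx = 96, ICE_NONQ_VECS_VF = 1
 *   q_vector->v_idx = 0  ->  vf_reg_idx = 1, reg_idx = 97
 *   q_vector->v_idx = 3  ->  vf_reg_idx = 4, reg_idx = 100
 * VF-relative vector 0 is left for the mailbox/OICR interrupt, which is why
 * every queue vector is offset by ICE_NONQ_VECS_VF.
 */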
|
|
|
|
|
|
|
|
/**
|
|
|
|
* ice_sriov_set_msix_res - Set any used MSIX resources
|
|
|
|
* @pf: pointer to PF structure
|
|
|
|
* @num_msix_needed: number of MSIX vectors needed for all SR-IOV VFs
|
|
|
|
*
|
|
|
|
* This function allows SR-IOV resources to be taken from the end of the PF's
|
2020-02-27 10:14:52 -08:00
|
|
|
* allowed HW MSIX vectors so that the irq_tracker will not be affected. We
|
|
|
|
* just set the pf->sriov_base_vector and return success.
|
2019-04-16 10:30:44 -07:00
|
|
|
*
|
2020-02-27 10:14:52 -08:00
|
|
|
* If there are not enough resources available, return an error. This should
|
|
|
|
* always be caught by ice_set_per_vf_res().
|
2019-04-16 10:30:44 -07:00
|
|
|
*
|
2020-10-07 10:54:41 -07:00
|
|
|
* Return 0 on success, and -EINVAL when there are not enough MSIX vectors
|
2019-04-16 10:30:44 -07:00
|
|
|
* in the PF's space available for SR-IOV.
|
|
|
|
*/
|
|
|
|
static int ice_sriov_set_msix_res(struct ice_pf *pf, u16 num_msix_needed)
|
|
|
|
{
|
2020-02-27 10:14:52 -08:00
|
|
|
u16 total_vectors = pf->hw.func_caps.common_cap.num_msix_vectors;
|
2023-05-15 21:03:19 +02:00
|
|
|
int vectors_used = ice_get_max_used_msix_vector(pf);
|
2019-04-16 10:30:44 -07:00
|
|
|
int sriov_base_vector;
|
|
|
|
|
2020-02-27 10:14:52 -08:00
|
|
|
sriov_base_vector = total_vectors - num_msix_needed;
|
2019-04-16 10:30:44 -07:00
|
|
|
|
|
|
|
/* make sure we only grab irq_tracker entries from the list end and
|
|
|
|
* that we have enough available MSIX vectors
|
|
|
|
*/
|
2020-02-27 10:14:52 -08:00
|
|
|
if (sriov_base_vector < vectors_used)
|
2019-04-16 10:30:44 -07:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
pf->sriov_base_vector = sriov_base_vector;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
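A short worked example of the check above, with made-up numbers:

/* Illustrative numbers only:
 *   total_vectors     = 1024  (pf->hw.func_caps.common_cap.num_msix_vectors)
 *   num_msix_needed   = 32 VFs * 17 vectors = 544
 *   sriov_base_vector = 1024 - 544 = 480
 * The call succeeds as long as 480 is not below the vectors already in use
 * by the PF (as reported by ice_get_max_used_msix_vector()); otherwise it
 * returns -EINVAL and no SR-IOV vectors are reserved.
 */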
|
|
|
|
|
2018-09-19 17:42:55 -07:00
|
|
|
/**
|
2020-02-27 10:14:52 -08:00
|
|
|
* ice_set_per_vf_res - check if vectors and queues are available
|
2018-09-19 17:42:55 -07:00
|
|
|
* @pf: pointer to the PF structure
|
2022-02-16 13:37:30 -08:00
|
|
|
* @num_vfs: the number of SR-IOV VFs being configured
|
2018-09-19 17:42:55 -07:00
|
|
|
*
|
2020-02-27 10:14:52 -08:00
|
|
|
 * First, determine the HW interrupts available from the common pool. If we allocate fewer VFs, we
|
|
|
|
* get more vectors and can enable more queues per VF. Note that this does not
|
|
|
|
 * grab any vectors from the SW pool already allocated. Also note that all
|
|
|
|
* vector counts include one for each VF's miscellaneous interrupt vector
|
|
|
|
* (i.e. OICR).
|
|
|
|
*
|
|
|
|
* Minimum VFs - 2 vectors, 1 queue pair
|
|
|
|
* Small VFs - 5 vectors, 4 queue pairs
|
|
|
|
* Medium VFs - 17 vectors, 16 queue pairs
|
|
|
|
*
|
|
|
|
* Second, determine number of queue pairs per VF by starting with a pre-defined
|
|
|
|
* maximum each VF supports. If this is not possible, then we adjust based on
|
|
|
|
* queue pairs available on the device.
|
|
|
|
*
|
|
|
|
* Lastly, set queue and MSI-X VF variables tracked by the PF so it can be used
|
|
|
|
* by each VF during VF initialization and reset.
|
2018-09-19 17:42:55 -07:00
|
|
|
*/
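For a rough feel of how the tiers above play out (illustrative numbers only, using the 2/5/17-vector tiers listed in this comment):

/* Illustrative only:
 *   200 vectors left for SR-IOV, 8 VFs  -> 25 available per VF -> medium
 *     tier (17 vectors, 16 queue pairs each)
 *   200 vectors left for SR-IOV, 50 VFs ->  4 available per VF -> below the
 *     small tier (5), so the allocation falls back to a smaller per-VF count
 */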
|
2022-02-16 13:37:30 -08:00
|
|
|
static int ice_set_per_vf_res(struct ice_pf *pf, u16 num_vfs)
|
2018-09-19 17:42:55 -07:00
|
|
|
{
|
2023-05-15 21:03:19 +02:00
|
|
|
int vectors_used = ice_get_max_used_msix_vector(pf);
|
2022-02-16 13:37:30 -08:00
|
|
|
u16 num_msix_per_vf, num_txq, num_rxq, avail_qs;
|
2020-02-27 10:14:53 -08:00
|
|
|
int msix_avail_per_vf, msix_avail_for_sriov;
|
2019-11-08 06:23:26 -08:00
|
|
|
struct device *dev = ice_pf_to_dev(pf);
|
2022-02-22 16:26:56 -08:00
|
|
|
int err;
|
2018-09-19 17:42:55 -07:00
|
|
|
|
2022-02-16 13:37:38 -08:00
|
|
|
lockdep_assert_held(&pf->vfs.table_lock);
|
|
|
|
|
2022-02-22 16:26:56 -08:00
|
|
|
if (!num_vfs)
|
2018-09-19 17:42:55 -07:00
|
|
|
return -EINVAL;
|
|
|
|
|
2020-02-27 10:14:52 -08:00
|
|
|
/* determine MSI-X resources per VF */
|
2020-02-27 10:14:53 -08:00
|
|
|
msix_avail_for_sriov = pf->hw.func_caps.common_cap.num_msix_vectors -
|
2023-05-15 21:03:19 +02:00
|
|
|
vectors_used;
|
2022-02-16 13:37:30 -08:00
|
|
|
msix_avail_per_vf = msix_avail_for_sriov / num_vfs;
|
2020-02-27 10:14:53 -08:00
|
|
|
if (msix_avail_per_vf >= ICE_NUM_VF_MSIX_MED) {
|
|
|
|
num_msix_per_vf = ICE_NUM_VF_MSIX_MED;
|
|
|
|
} else if (msix_avail_per_vf >= ICE_NUM_VF_MSIX_SMALL) {
|
|
|
|
num_msix_per_vf = ICE_NUM_VF_MSIX_SMALL;
|
2020-07-29 17:19:16 -07:00
|
|
|
} else if (msix_avail_per_vf >= ICE_NUM_VF_MSIX_MULTIQ_MIN) {
|
|
|
|
num_msix_per_vf = ICE_NUM_VF_MSIX_MULTIQ_MIN;
|
2020-02-27 10:14:53 -08:00
|
|
|
} else if (msix_avail_per_vf >= ICE_MIN_INTR_PER_VF) {
|
|
|
|
num_msix_per_vf = ICE_MIN_INTR_PER_VF;
|
2018-09-19 17:42:55 -07:00
|
|
|
} else {
|
2020-02-27 10:14:53 -08:00
|
|
|
dev_err(dev, "Only %d MSI-X interrupts available for SR-IOV. Not enough to support minimum of %d MSI-X interrupts per VF for %d VFs\n",
|
|
|
|
msix_avail_for_sriov, ICE_MIN_INTR_PER_VF,
|
2022-02-16 13:37:30 -08:00
|
|
|
num_vfs);
|
2022-02-22 16:26:56 -08:00
|
|
|
return -ENOSPC;
|
2018-09-19 17:42:55 -07:00
|
|
|
}
|
|
|
|
|
2022-02-16 13:37:30 -08:00
|
|
|
num_txq = min_t(u16, num_msix_per_vf - ICE_NONQ_VECS_VF,
|
|
|
|
ICE_MAX_RSS_QS_PER_VF);
|
|
|
|
avail_qs = ice_get_avail_txq_count(pf) / num_vfs;
|
|
|
|
if (!avail_qs)
|
|
|
|
num_txq = 0;
|
|
|
|
else if (num_txq > avail_qs)
|
|
|
|
num_txq = rounddown_pow_of_two(avail_qs);
|
|
|
|
|
|
|
|
num_rxq = min_t(u16, num_msix_per_vf - ICE_NONQ_VECS_VF,
|
|
|
|
ICE_MAX_RSS_QS_PER_VF);
|
|
|
|
avail_qs = ice_get_avail_rxq_count(pf) / num_vfs;
|
|
|
|
if (!avail_qs)
|
|
|
|
num_rxq = 0;
|
|
|
|
else if (num_rxq > avail_qs)
|
|
|
|
num_rxq = rounddown_pow_of_two(avail_qs);
|
|
|
|
|
|
|
|
if (num_txq < ICE_MIN_QS_PER_VF || num_rxq < ICE_MIN_QS_PER_VF) {
|
2020-02-27 10:14:53 -08:00
|
|
|
dev_err(dev, "Not enough queues to support minimum of %d queue pairs per VF for %d VFs\n",
|
2022-02-16 13:37:30 -08:00
|
|
|
ICE_MIN_QS_PER_VF, num_vfs);
|
2022-02-22 16:26:56 -08:00
|
|
|
return -ENOSPC;
|
2020-02-27 10:14:52 -08:00
|
|
|
}
|
2018-09-19 17:42:55 -07:00
|
|
|
|
2022-02-22 16:26:56 -08:00
|
|
|
err = ice_sriov_set_msix_res(pf, num_msix_per_vf * num_vfs);
|
|
|
|
if (err) {
|
|
|
|
dev_err(dev, "Unable to set MSI-X resources for %d VFs, err %d\n",
|
|
|
|
num_vfs, err);
|
|
|
|
return err;
|
2020-02-27 10:14:52 -08:00
|
|
|
}
|
ice: Refactor interrupt tracking
Currently we have two MSI-x (IRQ) trackers, one for OS requested MSI-x
entries (sw_irq_tracker) and one for hardware MSI-x vectors
(hw_irq_tracker). Generally the sw_irq_tracker has less entries than the
hw_irq_tracker because the hw_irq_tracker has entries equal to the max
allowed MSI-x per PF and the sw_irq_tracker is mainly the minimum (non
SR-IOV portion of the vectors, kernel granted IRQs). All of the non
SR-IOV portions of the driver (i.e. LAN queues, RDMA queues, OICR, etc.)
take at least one of each type of tracker resource. SR-IOV only grabs
entries from the hw_irq_tracker. There are a few issues with this approach
that can be seen when doing any kind of device reconfiguration (i.e.
ethtool -L, SR-IOV, etc.). One of them being, any time the driver creates
an ice_q_vector and associates it to a LAN queue pair it will grab and
use one entry from the hw_irq_tracker and one from the sw_irq_tracker.
If the indices on these does not match it will cause a Tx timeout, which
will cause a reset and then the indices will match up again and traffic
will resume. The mismatched indices come from the trackers not being the
same size and/or the search_hint in the two trackers not being equal.
Another reason for the refactor is the co-existence of features with
SR-IOV. If SR-IOV is enabled and the interrupts are taken from the end
of the sw_irq_tracker then other features can no longer use this space
because the hardware has now given the remaining interrupts to SR-IOV.
This patch reworks how we track MSI-x vectors by removing the
hw_irq_tracker completely and instead MSI-x resources needed for SR-IOV
are determined all at once instead of per VF. This can be done because
when creating VFs we know how many are wanted and how many MSI-x vectors
each VF needs. This also allows us to start using MSI-x resources from
the end of the PF's allowed MSI-x vectors so we are less likely to use
entries needed for other features (i.e. RDMA, L2 Offload, etc).
This patch also reworks the ice_res_tracker structure by removing the
search_hint and adding a new member - "end". Instead of having a
search_hint we will always search from 0. The new member, "end", will be
used to manipulate the end of the ice_res_tracker (specifically
sw_irq_tracker) during runtime based on MSI-x vectors needed by SR-IOV.
In the normal case, the end of ice_res_tracker will be equal to the
ice_res_tracker's num_entries.
The sriov_base_vector member was added to the PF structure. It is used
to represent the starting MSI-x index of all the needed MSI-x vectors
for all SR-IOV VFs. Depending on how many MSI-x are needed, SR-IOV may
have to take resources from the sw_irq_tracker. This is done by setting
the sw_irq_tracker->end equal to the pf->sriov_base_vector. When all
SR-IOV VFs are removed then the sw_irq_tracker->end is reset back to
sw_irq_tracker->num_entries. The sriov_base_vector, along with the VF's
number of MSI-x (pf->num_vf_msix), vf_id, and the base MSI-x index on
the PF (pf->hw.func_caps.common_cap.msix_vector_first_id), is used to
calculate the first HW absolute MSI-x index for each VF, which is used
to write to the VPINT_ALLOC[_PCI] and GLINT_VECT2FUNC registers to
program the VFs MSI-x PCI configuration bits. Also, the sriov_base_vector
is used along with VF's num_vf_msix, vf_id, and q_vector->v_idx to
determine the MSI-x register index (used for writing to GLINT_DYN_CTL)
within the PF's space.
Interrupt changes removed any references to hw_base_vector, hw_oicr_idx,
and hw_irq_tracker. Only sw_base_vector, sw_oicr_idx, and sw_irq_tracker
variables remain. Change all of these by removing the "sw_" prefix to
help avoid confusion with these variables and their use.
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-16 10:30:44 -07:00
|
|
|
|
2020-02-27 10:14:52 -08:00
|
|
|
/* only allow equal Tx/Rx queue count (i.e. queue pairs) */
|
2022-02-16 13:37:36 -08:00
|
|
|
pf->vfs.num_qps_per = min_t(int, num_txq, num_rxq);
|
|
|
|
pf->vfs.num_msix_per = num_msix_per_vf;
|
2020-02-27 10:14:52 -08:00
|
|
|
dev_info(dev, "Enabling %d VFs with %d vectors and %d queues per VF\n",
|
2022-02-16 13:37:36 -08:00
|
|
|
num_vfs, pf->vfs.num_msix_per, pf->vfs.num_qps_per);
|
2018-09-19 17:42:55 -07:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
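/*
 * Illustrative standalone sketch (not driver code) of the vector-tier
 * selection performed by ice_set_per_vf_res() above. The 17/5/2 tier sizes
 * come from the kernel-doc comment; the multi-queue minimum of 3 is an
 * assumption here, and the real values are defined in the driver headers.
 */
#include <stdio.h>

static int pick_msix_per_vf(int msix_avail_for_sriov, int num_vfs)
{
	int per_vf = msix_avail_for_sriov / num_vfs;

	if (per_vf >= 17)	/* medium: 16 queue pairs + OICR */
		return 17;
	if (per_vf >= 5)	/* small: 4 queue pairs + OICR */
		return 5;
	if (per_vf >= 3)	/* multi-queue minimum (assumed value) */
		return 3;
	if (per_vf >= 2)	/* minimum: 1 queue pair + OICR */
		return 2;
	return -1;		/* not enough vectors for this many VFs */
}

int main(void)
{
	/* e.g. 96 vectors left for SR-IOV shared by 16 VFs -> 5 per VF */
	printf("%d\n", pick_msix_per_vf(96, 16));
	return 0;
}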
|
|
|
|
|
2023-10-19 10:32:23 -07:00
|
|
|
/**
|
|
|
|
* ice_sriov_get_irqs - get irqs for the SR-IOV use case
|
|
|
|
* @pf: pointer to PF structure
|
|
|
|
* @needed: number of irqs to get
|
|
|
|
*
|
|
|
|
* This returns the first MSI-X vector index in PF space that is used by this
* VF, which is the index to use when accessing PF relative registers such as
* GLINT_VECT2FUNC and GLINT_DYN_CTL. Returns -ENOENT when a contiguous block
* of @needed vectors is not available within the SR-IOV range.
*
* Only SR-IOV specific vectors are tracked in sriov_irq_bm, and SR-IOV vectors
* are allocated from the end of the global irq index: the first bit in
* sriov_irq_bm corresponds to the last irq index, and so on. This simplifies
* extending the SR-IOV vector space later. The SR-IOV vectors always occupy
* the range from sriov_base_vector to the last irq index, and
* sriov_base_vector can move as that allocation grows or shrinks.
|
|
|
|
*/
|
|
|
|
static int ice_sriov_get_irqs(struct ice_pf *pf, u16 needed)
|
|
|
|
{
|
|
|
|
int res = bitmap_find_next_zero_area(pf->sriov_irq_bm,
|
|
|
|
pf->sriov_irq_size, 0, needed, 0);
|
|
|
|
/* conversion from number in bitmap to global irq index */
|
|
|
|
int index = pf->sriov_irq_size - res - needed;
|
|
|
|
|
|
|
|
if (res >= pf->sriov_irq_size || index < pf->sriov_base_vector)
|
|
|
|
return -ENOENT;
|
|
|
|
|
|
|
|
bitmap_set(pf->sriov_irq_bm, res, needed);
|
|
|
|
return index;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* ice_sriov_free_irqs - free irqs used by the VF
|
|
|
|
* @pf: pointer to PF structure
|
|
|
|
* @vf: pointer to VF structure
|
|
|
|
*/
|
|
|
|
static void ice_sriov_free_irqs(struct ice_pf *pf, struct ice_vf *vf)
|
|
|
|
{
|
|
|
|
/* Move back from first vector index to first index in bitmap */
|
|
|
|
int bm_i = pf->sriov_irq_size - vf->first_vector_idx - vf->num_msix;
|
|
|
|
|
|
|
|
bitmap_clear(pf->sriov_irq_bm, bm_i, vf->num_msix);
|
|
|
|
vf->first_vector_idx = 0;
|
|
|
|
}
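/*
 * Standalone sketch (not driver code) of the reversed bitmap indexing used by
 * ice_sriov_get_irqs() and ice_sriov_free_irqs(): bit 0 of sriov_irq_bm maps
 * to the highest global irq index, so a block of 'needed' bits starting at
 * bitmap offset 'res' covers global indices [size - res - needed, size - res - 1].
 */
#include <assert.h>

static int bm_offset_to_first_irq(int size, int res, int needed)
{
	return size - res - needed;	/* lowest global irq index in the block */
}

static int first_irq_to_bm_offset(int size, int first_irq, int needed)
{
	return size - first_irq - needed;	/* bitmap offset of the block */
}

int main(void)
{
	/* 1024 total vectors, block of 17 found at bitmap offset 0 */
	int first = bm_offset_to_first_irq(1024, 0, 17);

	assert(first == 1007);
	assert(first_irq_to_bm_offset(1024, first, 17) == 0);
	return 0;
}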
|
|
|
|
|
2020-05-15 17:51:10 -07:00
|
|
|
/**
|
|
|
|
* ice_init_vf_vsi_res - initialize/setup VF VSI resources
|
|
|
|
* @vf: VF to initialize/setup the VSI for
|
|
|
|
*
|
|
|
|
* This function creates a VSI for the VF, adds a VLAN 0 filter, and sets up the
* VF VSI's broadcast filter. It is only used during initial VF creation.
|
|
|
|
*/
|
|
|
|
static int ice_init_vf_vsi_res(struct ice_vf *vf)
|
|
|
|
{
|
|
|
|
struct ice_pf *pf = vf->pf;
|
|
|
|
struct ice_vsi *vsi;
|
|
|
|
int err;
|
|
|
|
|
2023-10-19 10:32:23 -07:00
|
|
|
vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
|
|
|
|
if (vf->first_vector_idx < 0)
|
|
|
|
return -ENOMEM;
|
2020-05-15 17:51:10 -07:00
|
|
|
|
2020-05-15 17:51:16 -07:00
|
|
|
vsi = ice_vf_vsi_setup(vf);
|
|
|
|
if (!vsi)
|
2020-05-15 17:51:10 -07:00
|
|
|
return -ENOMEM;
|
|
|
|
|
2023-01-18 17:16:49 -08:00
|
|
|
err = ice_vf_init_host_cfg(vf, vsi);
|
|
|
|
if (err)
|
2020-05-15 17:51:10 -07:00
|
|
|
goto release_vsi;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
release_vsi:
|
2020-05-15 17:51:16 -07:00
|
|
|
ice_vf_vsi_release(vf);
|
2020-05-15 17:51:10 -07:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* ice_start_vfs - start VFs so they are ready to be used by SR-IOV
|
|
|
|
* @pf: PF the VFs are associated with
|
|
|
|
*/
|
|
|
|
static int ice_start_vfs(struct ice_pf *pf)
|
|
|
|
{
|
|
|
|
struct ice_hw *hw = &pf->hw;
|
2022-02-16 13:37:35 -08:00
|
|
|
unsigned int bkt, it_cnt;
|
|
|
|
struct ice_vf *vf;
|
|
|
|
int retval;
|
2020-05-15 17:51:10 -07:00
|
|
|
|
2022-02-16 13:37:38 -08:00
|
|
|
lockdep_assert_held(&pf->vfs.table_lock);
|
|
|
|
|
2022-02-16 13:37:35 -08:00
|
|
|
it_cnt = 0;
|
|
|
|
ice_for_each_vf(pf, bkt, vf) {
|
2022-02-22 16:27:01 -08:00
|
|
|
vf->vf_ops->clear_reset_trigger(vf);
|
2020-05-15 17:51:10 -07:00
|
|
|
|
|
|
|
retval = ice_init_vf_vsi_res(vf);
|
|
|
|
if (retval) {
|
|
|
|
dev_err(ice_pf_to_dev(pf), "Failed to initialize VSI resources for VF %d, error %d\n",
|
|
|
|
vf->vf_id, retval);
|
|
|
|
goto teardown;
|
|
|
|
}
|
|
|
|
|
2023-10-24 13:09:27 +02:00
|
|
|
retval = ice_eswitch_attach(pf, vf);
|
|
|
|
if (retval) {
|
|
|
|
dev_err(ice_pf_to_dev(pf), "Failed to attach VF %d to eswitch, error %d",
|
|
|
|
vf->vf_id, retval);
|
|
|
|
ice_vf_vsi_release(vf);
|
|
|
|
goto teardown;
|
|
|
|
}
|
|
|
|
|
2020-05-15 17:51:10 -07:00
|
|
|
set_bit(ICE_VF_STATE_INIT, vf->vf_states);
|
|
|
|
ice_ena_vf_mappings(vf);
|
|
|
|
wr32(hw, VFGEN_RSTAT(vf->vf_id), VIRTCHNL_VFR_VFACTIVE);
|
2022-02-16 13:37:35 -08:00
|
|
|
it_cnt++;
|
2020-05-15 17:51:10 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
ice_flush(hw);
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
teardown:
|
2022-02-16 13:37:35 -08:00
|
|
|
ice_for_each_vf(pf, bkt, vf) {
|
|
|
|
if (it_cnt == 0)
|
|
|
|
break;
|
2020-05-15 17:51:10 -07:00
|
|
|
|
|
|
|
ice_dis_vf_mappings(vf);
|
2020-05-15 17:51:16 -07:00
|
|
|
ice_vf_vsi_release(vf);
|
2022-02-16 13:37:35 -08:00
|
|
|
it_cnt--;
|
2020-05-15 17:51:10 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
return retval;
|
|
|
|
}
|
|
|
|
|
2022-02-22 16:27:01 -08:00
|
|
|
/**
|
|
|
|
* ice_sriov_free_vf - Free VF memory after all references are dropped
|
|
|
|
* @vf: pointer to VF to free
|
|
|
|
*
|
|
|
|
* Called by ice_put_vf through ice_release_vf once the last reference to a VF
|
|
|
|
* structure has been dropped.
|
|
|
|
*/
|
|
|
|
static void ice_sriov_free_vf(struct ice_vf *vf)
|
|
|
|
{
|
|
|
|
mutex_destroy(&vf->cfg_lock);
|
|
|
|
|
|
|
|
kfree_rcu(vf, rcu);
|
|
|
|
}
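/*
 * Self-contained sketch of the kref + kfree_rcu release pattern behind
 * ice_sriov_free_vf() and ice_put_vf(): the last kref_put() invokes the
 * release callback, which defers the actual free until outstanding RCU
 * readers have finished. 'struct foo' and its helpers are hypothetical
 * stand-ins, not driver code.
 */
#include <linux/container_of.h>
#include <linux/kref.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	struct kref refcnt;
	struct rcu_head rcu;
};

static void foo_release(struct kref *ref)
{
	struct foo *f = container_of(ref, struct foo, refcnt);

	/* readers under rcu_read_lock() may still see f; defer the free */
	kfree_rcu(f, rcu);
}

static void foo_put(struct foo *f)
{
	kref_put(&f->refcnt, foo_release);
}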
|
|
|
|
|
ice: introduce clear_reset_state operation
When hardware is reset, the VF relies on the VFGEN_RSTAT register to detect
when the VF is finished resetting. This is a tri-state register where 0
indicates a reset is in progress, 1 indicates the hardware is done
resetting, and 2 indicates that the software is done resetting.
Currently the PF driver relies on the device hardware resetting VFGEN_RSTAT
when a global reset occurs. This works ok, but it does mean that the VF
might not immediately notice a reset when the driver first detects that the
global reset is occurring.
This is also problematic for Scalable IOV, because there is no read/write
equivalent VFGEN_RSTAT register for the Scalable VSI type. Instead, the
Scalable IOV VFs will need to emulate this register.
To support this, introduce a new VF operation, clear_reset_state, which is
called when the PF driver first detects a global reset. The Single Root IOV
implementation can just write to VFGEN_RSTAT to ensure it's cleared
immediately, without waiting for the actual hardware reset to begin. The
Scalable IOV implementation will use this as part of its tracking of the
reset status to allow properly reporting the emulated VFGEN_RSTAT to the VF
driver.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
Tested-by: Marek Szlosek <marek.szlosek@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-01-18 17:16:51 -08:00
|
|
|
/**
|
|
|
|
* ice_sriov_clear_reset_state - clears VF Reset status register
|
|
|
|
* @vf: the VF to configure
|
|
|
|
*/
|
|
|
|
static void ice_sriov_clear_reset_state(struct ice_vf *vf)
|
|
|
|
{
|
|
|
|
struct ice_hw *hw = &vf->pf->hw;
|
|
|
|
|
|
|
|
/* Clear the reset status register so that VF immediately sees that
|
|
|
|
* the device is resetting, even if hardware hasn't yet gotten around
|
|
|
|
* to clearing VFGEN_RSTAT for us.
|
|
|
|
*/
|
|
|
|
wr32(hw, VFGEN_RSTAT(vf->vf_id), VIRTCHNL_VFR_INPROGRESS);
|
|
|
|
}
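/*
 * Sketch of the VFGEN_RSTAT tri-state that ice_sriov_clear_reset_state()
 * manipulates, as described in the commit message above: 0 while a reset is
 * in progress, 1 once hardware has finished, 2 once software has finished and
 * the VF is active. The enum names below are illustrative stand-ins; the
 * driver itself writes the virtchnl VIRTCHNL_VFR_* values.
 */
enum vf_reset_rstat {
	VF_RSTAT_INPROGRESS = 0,	/* reset in progress */
	VF_RSTAT_HW_DONE    = 1,	/* hardware finished the reset */
	VF_RSTAT_SW_DONE    = 2,	/* PF software finished; VF active */
};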
|
|
|
|
|
2022-02-22 16:27:01 -08:00
|
|
|
/**
|
|
|
|
* ice_sriov_clear_mbx_register - clears SRIOV VF's mailbox registers
|
|
|
|
* @vf: the VF to configure
|
|
|
|
*/
|
|
|
|
static void ice_sriov_clear_mbx_register(struct ice_vf *vf)
|
|
|
|
{
|
|
|
|
struct ice_pf *pf = vf->pf;
|
|
|
|
|
|
|
|
wr32(&pf->hw, VF_MBX_ARQLEN(vf->vf_id), 0);
|
|
|
|
wr32(&pf->hw, VF_MBX_ATQLEN(vf->vf_id), 0);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* ice_sriov_trigger_reset_register - trigger VF reset for SRIOV VF
|
|
|
|
* @vf: pointer to VF structure
|
|
|
|
* @is_vflr: true if reset occurred due to VFLR
|
|
|
|
*
|
|
|
|
* Trigger and cleanup after a VF reset for a SR-IOV VF.
|
|
|
|
*/
|
|
|
|
static void ice_sriov_trigger_reset_register(struct ice_vf *vf, bool is_vflr)
|
|
|
|
{
|
|
|
|
struct ice_pf *pf = vf->pf;
|
|
|
|
u32 reg, reg_idx, bit_idx;
|
|
|
|
unsigned int vf_abs_id, i;
|
|
|
|
struct device *dev;
|
|
|
|
struct ice_hw *hw;
|
|
|
|
|
|
|
|
dev = ice_pf_to_dev(pf);
|
|
|
|
hw = &pf->hw;
|
|
|
|
vf_abs_id = vf->vf_id + hw->func_caps.vf_base_id;
|
|
|
|
|
|
|
|
/* In the case of a VFLR, HW has already reset the VF and we just need
|
|
|
|
* to clean up. Otherwise we must first trigger the reset using the
|
|
|
|
* VFRTRIG register.
|
|
|
|
*/
|
|
|
|
if (!is_vflr) {
|
|
|
|
reg = rd32(hw, VPGEN_VFRTRIG(vf->vf_id));
|
|
|
|
reg |= VPGEN_VFRTRIG_VFSWR_M;
|
|
|
|
wr32(hw, VPGEN_VFRTRIG(vf->vf_id), reg);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* clear the VFLR bit in GLGEN_VFLRSTAT */
|
|
|
|
reg_idx = (vf_abs_id) / 32;
|
|
|
|
bit_idx = (vf_abs_id) % 32;
|
|
|
|
wr32(hw, GLGEN_VFLRSTAT(reg_idx), BIT(bit_idx));
|
|
|
|
ice_flush(hw);
|
|
|
|
|
|
|
|
wr32(hw, PF_PCI_CIAA,
|
|
|
|
VF_DEVICE_STATUS | (vf_abs_id << PF_PCI_CIAA_VF_NUM_S));
|
|
|
|
for (i = 0; i < ICE_PCI_CIAD_WAIT_COUNT; i++) {
|
|
|
|
reg = rd32(hw, PF_PCI_CIAD);
|
|
|
|
/* no transactions pending so stop polling */
|
|
|
|
if ((reg & VF_TRANS_PENDING_M) == 0)
|
|
|
|
break;
|
|
|
|
|
|
|
|
dev_err(dev, "VF %u PCI transactions stuck\n", vf->vf_id);
|
|
|
|
udelay(ICE_PCI_CIAD_WAIT_DELAY_US);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* ice_sriov_poll_reset_status - poll SRIOV VF reset status
|
|
|
|
* @vf: pointer to VF structure
|
|
|
|
*
|
|
|
|
* Returns true when reset is successful, else returns false
|
|
|
|
*/
|
|
|
|
static bool ice_sriov_poll_reset_status(struct ice_vf *vf)
|
|
|
|
{
|
|
|
|
struct ice_pf *pf = vf->pf;
|
|
|
|
unsigned int i;
|
|
|
|
u32 reg;
|
|
|
|
|
|
|
|
for (i = 0; i < 10; i++) {
|
|
|
|
/* VF reset requires driver to first reset the VF and then
|
|
|
|
* poll the status register to make sure that the reset
|
|
|
|
* completed successfully.
|
|
|
|
*/
|
|
|
|
reg = rd32(&pf->hw, VPGEN_VFRSTAT(vf->vf_id));
|
|
|
|
if (reg & VPGEN_VFRSTAT_VFRD_M)
|
|
|
|
return true;
|
|
|
|
|
|
|
|
/* only sleep if the reset is not done */
|
|
|
|
usleep_range(10, 20);
|
|
|
|
}
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* ice_sriov_clear_reset_trigger - enable VF to access hardware
|
|
|
|
* @vf: VF to enable hardware access for
|
|
|
|
*/
|
|
|
|
static void ice_sriov_clear_reset_trigger(struct ice_vf *vf)
|
|
|
|
{
|
|
|
|
struct ice_hw *hw = &vf->pf->hw;
|
|
|
|
u32 reg;
|
|
|
|
|
|
|
|
reg = rd32(hw, VPGEN_VFRTRIG(vf->vf_id));
|
|
|
|
reg &= ~VPGEN_VFRTRIG_VFSWR_M;
|
|
|
|
wr32(hw, VPGEN_VFRTRIG(vf->vf_id), reg);
|
|
|
|
ice_flush(hw);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* ice_sriov_post_vsi_rebuild - tasks to do after the VF's VSI has been rebuilt
|
|
|
|
* @vf: VF to perform tasks on
|
|
|
|
*/
|
|
|
|
static void ice_sriov_post_vsi_rebuild(struct ice_vf *vf)
|
|
|
|
{
|
|
|
|
ice_ena_vf_mappings(vf);
|
|
|
|
wr32(&vf->pf->hw, VFGEN_RSTAT(vf->vf_id), VIRTCHNL_VFR_VFACTIVE);
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct ice_vf_ops ice_sriov_vf_ops = {
|
|
|
|
.reset_type = ICE_VF_RESET,
|
|
|
|
.free = ice_sriov_free_vf,
|
2023-01-18 17:16:51 -08:00
|
|
|
.clear_reset_state = ice_sriov_clear_reset_state,
|
2022-02-22 16:27:01 -08:00
|
|
|
.clear_mbx_register = ice_sriov_clear_mbx_register,
|
|
|
|
.trigger_reset_register = ice_sriov_trigger_reset_register,
|
|
|
|
.poll_reset_status = ice_sriov_poll_reset_status,
|
|
|
|
.clear_reset_trigger = ice_sriov_clear_reset_trigger,
|
2023-01-18 17:16:52 -08:00
|
|
|
.irq_close = NULL,
|
2022-02-22 16:27:01 -08:00
|
|
|
.post_vsi_rebuild = ice_sriov_post_vsi_rebuild,
|
|
|
|
};
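/*
 * Minimal sketch of how the ops table above is intended to be consumed:
 * generic VF reset code calls through vf->vf_ops rather than the SR-IOV
 * functions directly, which is what will allow Scalable IOV to plug in its
 * own callbacks. 'example_trigger_and_wait' is a hypothetical helper, not
 * the driver's actual reset path.
 */
static bool example_trigger_and_wait(struct ice_vf *vf, bool is_vflr)
{
	vf->vf_ops->trigger_reset_register(vf, is_vflr);
	return vf->vf_ops->poll_reset_status(vf);
}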
|
|
|
|
|
2018-09-19 17:42:55 -07:00
|
|
|
/**
|
2022-02-16 13:37:38 -08:00
|
|
|
* ice_create_vf_entries - Allocate and insert VF entries
|
|
|
|
* @pf: pointer to the PF structure
|
|
|
|
* @num_vfs: the number of VFs to allocate
|
|
|
|
*
|
|
|
|
* Allocate new VF entries and insert them into the hash table. Set some
|
|
|
|
* basic default fields for initializing the new VFs.
|
|
|
|
*
|
|
|
|
* After this function exits, the hash table will have num_vfs entries
|
|
|
|
* inserted.
|
|
|
|
*
|
|
|
|
* Returns 0 on success or an integer error code on failure.
|
2020-05-15 17:51:11 -07:00
|
|
|
*/
|
2022-02-16 13:37:38 -08:00
|
|
|
static int ice_create_vf_entries(struct ice_pf *pf, u16 num_vfs)
|
2020-05-15 17:51:11 -07:00
|
|
|
{
|
2023-10-19 10:32:19 -07:00
|
|
|
struct pci_dev *pdev = pf->pdev;
|
2022-02-16 13:37:38 -08:00
|
|
|
struct ice_vfs *vfs = &pf->vfs;
|
2023-10-19 10:32:19 -07:00
|
|
|
struct pci_dev *vfdev = NULL;
|
2022-02-16 13:37:35 -08:00
|
|
|
struct ice_vf *vf;
|
2023-10-19 10:32:19 -07:00
|
|
|
u16 vf_pdev_id;
|
|
|
|
int err, pos;
|
2022-02-16 13:37:38 -08:00
|
|
|
|
|
|
|
lockdep_assert_held(&vfs->table_lock);
|
|
|
|
|
2023-10-19 10:32:19 -07:00
|
|
|
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV);
|
|
|
|
pci_read_config_word(pdev, pos + PCI_SRIOV_VF_DID, &vf_pdev_id);
|
|
|
|
|
|
|
|
for (u16 vf_id = 0; vf_id < num_vfs; vf_id++) {
|
2022-02-16 13:37:38 -08:00
|
|
|
vf = kzalloc(sizeof(*vf), GFP_KERNEL);
|
|
|
|
if (!vf) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto err_free_entries;
|
|
|
|
}
|
|
|
|
kref_init(&vf->refcnt);
|
2020-05-15 17:51:11 -07:00
|
|
|
|
|
|
|
vf->pf = pf;
|
2022-02-16 13:37:38 -08:00
|
|
|
vf->vf_id = vf_id;
|
|
|
|
|
2022-02-22 16:27:01 -08:00
|
|
|
/* set sriov vf ops for VFs created during SRIOV flow */
|
|
|
|
vf->vf_ops = &ice_sriov_vf_ops;
|
|
|
|
|
2023-01-18 17:16:48 -08:00
|
|
|
ice_initialize_vf_entry(vf);
|
2021-08-19 17:08:51 -07:00
|
|
|
|
2023-10-19 10:32:19 -07:00
|
|
|
do {
|
|
|
|
vfdev = pci_get_device(pdev->vendor, vf_pdev_id, vfdev);
|
|
|
|
} while (vfdev && vfdev->physfn != pdev);
|
|
|
|
vf->vfdev = vfdev;
|
2023-01-18 17:16:48 -08:00
|
|
|
vf->vf_sw_id = pf->first_sw;
|
2020-05-15 17:51:11 -07:00
|
|
|
|
2023-10-19 10:32:19 -07:00
|
|
|
pci_dev_get(vfdev);
|
2023-10-19 10:32:20 -07:00
|
|
|
|
2022-02-16 13:37:38 -08:00
|
|
|
hash_add_rcu(vfs->table, &vf->entry, vf_id);
|
|
|
|
}
|
2020-05-15 17:51:11 -07:00
|
|
|
|
2023-10-19 10:32:19 -07:00
|
|
|
/* Decrement of refcount done by pci_get_device() inside the loop does
|
|
|
|
* not touch the last iteration's vfdev, so it has to be done manually
|
|
|
|
* to balance pci_dev_get() added within the loop.
|
|
|
|
*/
|
|
|
|
pci_dev_put(vfdev);
|
|
|
|
|
2020-05-15 17:51:11 -07:00
|
|
|
return 0;
|
2022-02-16 13:37:38 -08:00
|
|
|
|
|
|
|
err_free_entries:
|
|
|
|
ice_free_vf_entries(pf);
|
|
|
|
return err;
|
2020-05-15 17:51:11 -07:00
|
|
|
}
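/*
 * Sketch of the pci_get_device() walk used in ice_create_vf_entries() above
 * (kernel context assumed; 'match_vf_of_pf' is a hypothetical helper).
 * pci_get_device() drops the reference on the device passed in as the
 * starting point and takes one on the device it returns, so only the final
 * matched device needs an explicit pci_dev_put() when the caller is done.
 */
#include <linux/pci.h>

static struct pci_dev *match_vf_of_pf(struct pci_dev *pf_pdev, u16 vf_did)
{
	struct pci_dev *vfdev = NULL;

	do {
		vfdev = pci_get_device(pf_pdev->vendor, vf_did, vfdev);
	} while (vfdev && vfdev->physfn != pf_pdev);

	return vfdev;	/* caller releases with pci_dev_put(vfdev) */
}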
|
|
|
|
|
|
|
|
/**
|
|
|
|
* ice_ena_vfs - enable VFs so they are ready to be used
|
2018-09-19 17:42:55 -07:00
|
|
|
* @pf: pointer to the PF structure
|
2020-05-15 17:51:11 -07:00
|
|
|
* @num_vfs: number of VFs to enable
|
2018-09-19 17:42:55 -07:00
|
|
|
*/
|
2020-05-15 17:51:11 -07:00
|
|
|
static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
|
2018-09-19 17:42:55 -07:00
|
|
|
{
|
2023-10-19 10:32:21 -07:00
|
|
|
int total_vectors = pf->hw.func_caps.common_cap.num_msix_vectors;
|
2019-11-08 06:23:26 -08:00
|
|
|
struct device *dev = ice_pf_to_dev(pf);
|
2018-09-19 17:42:55 -07:00
|
|
|
struct ice_hw *hw = &pf->hw;
|
2020-05-15 17:51:11 -07:00
|
|
|
int ret;
|
2018-09-19 17:42:55 -07:00
|
|
|
|
2023-10-19 10:32:21 -07:00
|
|
|
pf->sriov_irq_bm = bitmap_zalloc(total_vectors, GFP_KERNEL);
|
|
|
|
if (!pf->sriov_irq_bm)
|
|
|
|
return -ENOMEM;
|
|
|
|
pf->sriov_irq_size = total_vectors;
|
|
|
|
|
2018-09-19 17:42:55 -07:00
|
|
|
/* Disable global interrupt 0 so we don't try to handle the VFLR. */
|
2023-05-15 21:03:17 +02:00
|
|
|
wr32(hw, GLINT_DYN_CTL(pf->oicr_irq.index),
|
2018-09-19 17:42:55 -07:00
|
|
|
ICE_ITR_NONE << GLINT_DYN_CTL_ITR_INDX_S);
|
2021-03-02 10:15:38 -08:00
|
|
|
set_bit(ICE_OICR_INTR_DIS, pf->state);
|
2018-09-19 17:42:55 -07:00
|
|
|
ice_flush(hw);
|
|
|
|
|
2020-05-15 17:51:11 -07:00
|
|
|
ret = pci_enable_sriov(pf->pdev, num_vfs);
|
2022-02-16 13:37:38 -08:00
|
|
|
if (ret)
|
2018-09-19 17:42:55 -07:00
|
|
|
goto err_unroll_intr;
|
2020-05-15 17:51:11 -07:00
|
|
|
|
2022-02-16 13:37:38 -08:00
|
|
|
mutex_lock(&pf->vfs.table_lock);
|
2018-09-19 17:42:55 -07:00
|
|
|
|
2022-02-22 16:26:56 -08:00
|
|
|
ret = ice_set_per_vf_res(pf, num_vfs);
|
|
|
|
if (ret) {
|
|
|
|
dev_err(dev, "Not enough resources for %d VFs, err %d. Try with fewer number of VFs\n",
|
|
|
|
num_vfs, ret);
|
2020-05-15 17:51:10 -07:00
|
|
|
goto err_unroll_sriov;
|
|
|
|
}
|
|
|
|
|
2022-02-16 13:37:38 -08:00
|
|
|
ret = ice_create_vf_entries(pf, num_vfs);
|
|
|
|
if (ret) {
|
|
|
|
dev_err(dev, "Failed to allocate VF entries for %d VFs\n",
|
|
|
|
num_vfs);
|
|
|
|
goto err_unroll_sriov;
|
|
|
|
}
|
2018-09-19 17:42:55 -07:00
|
|
|
|
2022-02-22 16:26:56 -08:00
|
|
|
ret = ice_start_vfs(pf);
|
|
|
|
if (ret) {
|
|
|
|
dev_err(dev, "Failed to start %d VFs, err %d\n", num_vfs, ret);
|
2020-05-15 17:51:10 -07:00
|
|
|
ret = -EAGAIN;
|
2022-02-16 13:37:38 -08:00
		goto err_unroll_vf_entries;
	}

	clear_bit(ICE_VF_DIS, pf->state);

	/* rearm global interrupts */
	if (test_and_clear_bit(ICE_OICR_INTR_DIS, pf->state))
		ice_irq_dynamic_ena(hw, NULL, NULL);

	mutex_unlock(&pf->vfs.table_lock);

	return 0;

err_unroll_vf_entries:
	ice_free_vf_entries(pf);
err_unroll_sriov:
	mutex_unlock(&pf->vfs.table_lock);
	pci_disable_sriov(pf->pdev);
err_unroll_intr:
	/* rearm interrupts here */
	ice_irq_dynamic_ena(hw, NULL, NULL);
	clear_bit(ICE_OICR_INTR_DIS, pf->state);
	bitmap_free(pf->sriov_irq_bm);
	return ret;
}

/**
 * ice_pci_sriov_ena - Enable or change number of VFs
 * @pf: pointer to the PF structure
 * @num_vfs: number of VFs to allocate
 *
 * Returns 0 on success and negative on failure
 */
static int ice_pci_sriov_ena(struct ice_pf *pf, int num_vfs)
{
	struct device *dev = ice_pf_to_dev(pf);
	int err;

	if (!num_vfs) {
		ice_free_vfs(pf);
		return 0;
	}

	if (num_vfs > pf->vfs.num_supported) {
		dev_err(dev, "Can't enable %d VFs, max VFs supported is %d\n",
			num_vfs, pf->vfs.num_supported);
		return -EOPNOTSUPP;
	}

	dev_info(dev, "Enabling %d VFs\n", num_vfs);
	err = ice_ena_vfs(pf, num_vfs);
	if (err) {
		dev_err(dev, "Failed to enable SR-IOV: %d\n", err);
		return err;
	}

	set_bit(ICE_FLAG_SRIOV_ENA, pf->flags);
	return 0;
}

/**
 * ice_check_sriov_allowed - check if SR-IOV is allowed based on various checks
 * @pf: PF to enable SR-IOV on
 */
static int ice_check_sriov_allowed(struct ice_pf *pf)
{
	struct device *dev = ice_pf_to_dev(pf);

	if (!test_bit(ICE_FLAG_SRIOV_CAPABLE, pf->flags)) {
		dev_err(dev, "This device is not capable of SR-IOV\n");
		return -EOPNOTSUPP;
	}

	if (ice_is_safe_mode(pf)) {
		dev_err(dev, "SR-IOV cannot be configured - Device is in Safe Mode\n");
		return -EOPNOTSUPP;
	}

	if (!ice_pf_state_is_nominal(pf)) {
		dev_err(dev, "Cannot enable SR-IOV, device not ready\n");
		return -EBUSY;
	}

	return 0;
}

/**
 * ice_sriov_get_vf_total_msix - return number of MSI-X used by VFs
 * @pdev: pointer to pci_dev struct
 *
 * The function is called via sysfs ops
 */
u32 ice_sriov_get_vf_total_msix(struct pci_dev *pdev)
{
	struct ice_pf *pf = pci_get_drvdata(pdev);

	return pf->sriov_irq_size - ice_get_max_used_msix_vector(pf);
}

static int ice_sriov_move_base_vector(struct ice_pf *pf, int move)
{
	if (pf->sriov_base_vector - move < ice_get_max_used_msix_vector(pf))
		return -ENOMEM;

	pf->sriov_base_vector -= move;
	return 0;
}

static void ice_sriov_remap_vectors(struct ice_pf *pf, u16 restricted_id)
{
	u16 vf_ids[ICE_MAX_SRIOV_VFS];
	struct ice_vf *tmp_vf;
	int to_remap = 0, bkt;

	/* For better irqs usage try to remap irqs of VFs
	 * that aren't running yet
	 */
	ice_for_each_vf(pf, bkt, tmp_vf) {
		/* skip VF which is changing the number of MSI-X */
		if (restricted_id == tmp_vf->vf_id ||
		    test_bit(ICE_VF_STATE_ACTIVE, tmp_vf->vf_states))
			continue;

		ice_dis_vf_mappings(tmp_vf);
		ice_sriov_free_irqs(pf, tmp_vf);

		vf_ids[to_remap] = tmp_vf->vf_id;
		to_remap += 1;
	}

	for (int i = 0; i < to_remap; i++) {
		tmp_vf = ice_get_vf_by_id(pf, vf_ids[i]);
		if (!tmp_vf)
			continue;

		tmp_vf->first_vector_idx =
			ice_sriov_get_irqs(pf, tmp_vf->num_msix);
		/* there is no need to rebuild VSI as we are only changing the
		 * vector indexes not amount of MSI-X or queues
		 */
		ice_ena_vf_mappings(tmp_vf);
		ice_put_vf(tmp_vf);
	}
}

/**
 * ice_sriov_set_msix_vec_count - set a new MSI-X vector count for a VF
 * @vf_dev: pointer to pci_dev struct of VF device
 * @msix_vec_count: new value for MSI-X amount on this VF
 *
 * Set the requested MSI-X, queues and registers for @vf_dev.
 *
 * First do some sanity checks, such as whether there are any VFs and whether
 * the new value is valid. Then disable the old mapping (MSI-X and queue
 * registers), change the MSI-X and queue counts, rebuild the VSI and enable
 * the new mapping.
 *
 * If possible (i.e. no driver is bound to the VF), also remap the other VFs
 * so that their interrupt register usage stays linear.
 */
int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
{
	struct pci_dev *pdev = pci_physfn(vf_dev);
	struct ice_pf *pf = pci_get_drvdata(pdev);
	u16 prev_msix, prev_queues, queues;
	bool needs_rebuild = false;
	struct ice_vsi *vsi;
	struct ice_vf *vf;
	int id;

	if (!ice_get_num_vfs(pf))
		return -ENOENT;

	if (!msix_vec_count)
		return 0;

	queues = msix_vec_count;
	/* add 1 MSI-X for OICR */
	msix_vec_count += 1;

	if (queues > min(ice_get_avail_txq_count(pf),
			 ice_get_avail_rxq_count(pf)))
		return -EINVAL;

	if (msix_vec_count < ICE_MIN_INTR_PER_VF)
		return -EINVAL;

	/* Transition of PCI VF function number to function_id */
	for (id = 0; id < pci_num_vf(pdev); id++) {
		if (vf_dev->devfn == pci_iov_virtfn_devfn(pdev, id))
			break;
	}

	if (id == pci_num_vf(pdev))
		return -ENOENT;

	vf = ice_get_vf_by_id(pf, id);

	if (!vf)
		return -ENOENT;

	vsi = ice_get_vf_vsi(vf);
	if (!vsi)
		return -ENOENT;

	prev_msix = vf->num_msix;
	prev_queues = vf->num_vf_qs;

	if (ice_sriov_move_base_vector(pf, msix_vec_count - prev_msix)) {
		ice_put_vf(vf);
		return -ENOSPC;
	}

	ice_dis_vf_mappings(vf);
	ice_sriov_free_irqs(pf, vf);

	/* Remap all VFs besides the one that is now being configured */
	ice_sriov_remap_vectors(pf, vf->vf_id);

	vf->num_msix = msix_vec_count;
	vf->num_vf_qs = queues;
	vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
	if (vf->first_vector_idx < 0)
		goto unroll;

	if (ice_vf_reconfig_vsi(vf) || ice_vf_init_host_cfg(vf, vsi)) {
		/* Try to rebuild with previous values */
		needs_rebuild = true;
		goto unroll;
	}

	dev_info(ice_pf_to_dev(pf),
		 "Changing VF %d resources to %d vectors and %d queues\n",
		 vf->vf_id, vf->num_msix, vf->num_vf_qs);

	ice_ena_vf_mappings(vf);
	ice_put_vf(vf);

	return 0;

unroll:
	dev_info(ice_pf_to_dev(pf),
		 "Can't set %d vectors on VF %d, falling back to %d\n",
		 vf->num_msix, vf->vf_id, prev_msix);

	vf->num_msix = prev_msix;
	vf->num_vf_qs = prev_queues;
	vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
	if (vf->first_vector_idx < 0)
		return -EINVAL;

	if (needs_rebuild) {
		ice_vf_reconfig_vsi(vf);
		ice_vf_init_host_cfg(vf, vsi);
	}

	ice_ena_vf_mappings(vf);
	ice_put_vf(vf);

	return -EINVAL;
}
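
/* Editorial sketch (illustrative, not part of the original file): the two
 * MSI-X helpers above are exported to the PCI core through struct pci_driver
 * callbacks, which the core in turn exposes as the PF's "sriov_vf_total_msix"
 * and each VF's "sriov_vf_msix_count" sysfs attributes.  The structure below
 * only illustrates that wiring under those assumptions; the real ice_driver
 * definition lives in ice_main.c and is not reproduced here.
 */
#if 0	/* illustration only */
static struct pci_driver ice_driver_msix_sketch = {
	.name			  = KBUILD_MODNAME,
	.sriov_get_vf_total_msix  = ice_sriov_get_vf_total_msix,
	.sriov_set_msix_vec_count = ice_sriov_set_msix_vec_count,
	/* remaining callbacks elided */
};
#endif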

/**
 * ice_sriov_configure - Enable or change number of VFs via sysfs
 * @pdev: pointer to a pci_dev structure
 * @num_vfs: number of VFs to allocate or 0 to free VFs
 *
 * This function is called when the user updates the number of VFs in sysfs. On
 * success return whatever num_vfs was set to by the caller. Return negative on
 * failure.
 */
int ice_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
	struct ice_pf *pf = pci_get_drvdata(pdev);
	struct device *dev = ice_pf_to_dev(pf);
	int err;

	err = ice_check_sriov_allowed(pf);
	if (err)
		return err;

	if (!num_vfs) {
		if (!pci_vfs_assigned(pdev)) {
			ice_free_vfs(pf);
			return 0;
		}

		dev_err(dev, "can't free VFs because some are assigned to VMs.\n");
		return -EBUSY;
	}

	err = ice_pci_sriov_ena(pf, num_vfs);
	if (err)
		return err;

	return num_vfs;
}
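
/* Editorial sketch (illustrative): ice_sriov_configure() is reached through
 * the PCI core's "sriov_numvfs" sysfs attribute as the driver's
 * .sriov_configure callback.  Assuming a PF at the example address
 * 0000:3b:00.0, a typical interaction might look like:
 *
 *   echo 4 > /sys/bus/pci/devices/0000:3b:00.0/sriov_numvfs   # enable 4 VFs
 *   echo 0 > /sys/bus/pci/devices/0000:3b:00.0/sriov_numvfs   # free the VFs
 *
 * Freeing only succeeds when no VF is assigned to a VM, matching the
 * pci_vfs_assigned() check above.
 */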

/**
 * ice_process_vflr_event - Free VF resources via IRQ calls
 * @pf: pointer to the PF structure
 *
 * Called from the VFLR IRQ handler to free up VF resources and state
 * variables.
 */
void ice_process_vflr_event(struct ice_pf *pf)
{
	struct ice_hw *hw = &pf->hw;
	struct ice_vf *vf;
	unsigned int bkt;
	u32 reg;

	if (!test_and_clear_bit(ICE_VFLR_EVENT_PENDING, pf->state) ||
	    !ice_has_vfs(pf))
		return;

	mutex_lock(&pf->vfs.table_lock);
	ice_for_each_vf(pf, bkt, vf) {
		u32 reg_idx, bit_idx;

		reg_idx = (hw->func_caps.vf_base_id + vf->vf_id) / 32;
		bit_idx = (hw->func_caps.vf_base_id + vf->vf_id) % 32;
		/* read GLGEN_VFLRSTAT register to find out the flr VFs */
		reg = rd32(hw, GLGEN_VFLRSTAT(reg_idx));
		if (reg & BIT(bit_idx))
			/* GLGEN_VFLRSTAT bit will be cleared in ice_reset_vf */
			ice_reset_vf(vf, ICE_VF_RESET_VFLR | ICE_VF_RESET_LOCK);
	}
	mutex_unlock(&pf->vfs.table_lock);
}

/**
 * ice_get_vf_from_pfq - get the VF who owns the PF space queue passed in
 * @pf: PF used to index all VFs
 * @pfq: queue index relative to the PF's function space
 *
 * If no VF is found who owns the pfq then return NULL, otherwise return a
 * pointer to the VF who owns the pfq
 *
 * If this function returns non-NULL, it acquires a reference count of the VF
 * structure. The caller is responsible for calling ice_put_vf() to drop this
 * reference.
 */
static struct ice_vf *ice_get_vf_from_pfq(struct ice_pf *pf, u16 pfq)
{
	struct ice_vf *vf;
	unsigned int bkt;

	rcu_read_lock();
	ice_for_each_vf_rcu(pf, bkt, vf) {
		struct ice_vsi *vsi;
		u16 rxq_idx;

		vsi = ice_get_vf_vsi(vf);
		if (!vsi)
			continue;

		ice_for_each_rxq(vsi, rxq_idx)
			if (vsi->rxq_map[rxq_idx] == pfq) {
				struct ice_vf *found;

				if (kref_get_unless_zero(&vf->refcnt))
					found = vf;
				else
					found = NULL;
				rcu_read_unlock();
				return found;
			}
	}
	rcu_read_unlock();

	return NULL;
}

/**
 * ice_globalq_to_pfq - convert from global queue index to PF space queue index
 * @pf: PF used for conversion
 * @globalq: global queue index used to convert to PF space queue index
 */
static u32 ice_globalq_to_pfq(struct ice_pf *pf, u32 globalq)
{
	return globalq - pf->hw.func_caps.common_cap.rxq_first_id;
}

/**
 * ice_vf_lan_overflow_event - handle LAN overflow event for a VF
 * @pf: PF that the LAN overflow event happened on
 * @event: structure holding the event information for the LAN overflow event
 *
 * Determine if the LAN overflow event was caused by a VF queue. If it was not
 * caused by a VF, do nothing. If a VF caused this LAN overflow event, trigger
 * a reset on the offending VF.
 */
void
ice_vf_lan_overflow_event(struct ice_pf *pf, struct ice_rq_event_info *event)
{
	u32 gldcb_rtctq, queue;
	struct ice_vf *vf;

	gldcb_rtctq = le32_to_cpu(event->desc.params.lan_overflow.prtdcb_ruptq);
	dev_dbg(ice_pf_to_dev(pf), "GLDCB_RTCTQ: 0x%08x\n", gldcb_rtctq);

	/* event returns device global Rx queue number */
	queue = FIELD_GET(GLDCB_RTCTQ_RXQNUM_M, gldcb_rtctq);

	vf = ice_get_vf_from_pfq(pf, ice_globalq_to_pfq(pf, queue));
	if (!vf)
		return;

	ice_reset_vf(vf, ICE_VF_RESET_NOTIFY | ICE_VF_RESET_LOCK);
	ice_put_vf(vf);
}

/**
 * ice_set_vf_spoofchk
 * @netdev: network interface device structure
 * @vf_id: VF identifier
 * @ena: flag to enable or disable feature
 *
 * Enable or disable VF spoof checking
 */
int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena)
{
	struct ice_netdev_priv *np = netdev_priv(netdev);
	struct ice_pf *pf = np->vsi->back;
	struct ice_vsi *vf_vsi;
	struct device *dev;
	struct ice_vf *vf;
	int ret;

	dev = ice_pf_to_dev(pf);

	vf = ice_get_vf_by_id(pf, vf_id);
	if (!vf)
		return -EINVAL;

	ret = ice_check_vf_ready_for_cfg(vf);
	if (ret)
		goto out_put_vf;

	vf_vsi = ice_get_vf_vsi(vf);
	if (!vf_vsi) {
		netdev_err(netdev, "VSI %d for VF %d is null\n",
			   vf->lan_vsi_idx, vf->vf_id);
		ret = -EINVAL;
		goto out_put_vf;
	}

	if (vf_vsi->type != ICE_VSI_VF) {
		netdev_err(netdev, "Type %d of VSI %d for VF %d is no ICE_VSI_VF\n",
			   vf_vsi->type, vf_vsi->vsi_num, vf->vf_id);
		ret = -ENODEV;
		goto out_put_vf;
	}

	if (ena == vf->spoofchk) {
		dev_dbg(dev, "VF spoofchk already %s\n", ena ? "ON" : "OFF");
		ret = 0;
		goto out_put_vf;
	}

	ret = ice_vsi_apply_spoofchk(vf_vsi, ena);
	if (ret)
		dev_err(dev, "Failed to set spoofchk %s for VF %d VSI %d\n error %d\n",
			ena ? "ON" : "OFF", vf->vf_id, vf_vsi->vsi_num, ret);
	else
		vf->spoofchk = ena;

out_put_vf:
	ice_put_vf(vf);
	return ret;
}
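
/* Editorial sketch (illustrative, not part of the original file): the per-VF
 * ndo callbacks in this file, including ice_set_vf_spoofchk() above, are
 * plugged into the PF's net_device_ops (see ice_netdev_ops in ice_main.c) and
 * are driven from userspace via rtnetlink, e.g.
 * "ip link set dev <pf-netdev> vf 2 spoofchk on".  The structure below is a
 * sketch of that wiring only, with most members elided.
 */
#if 0	/* illustration only */
static const struct net_device_ops ice_netdev_ops_sketch = {
	.ndo_set_vf_spoofchk	= ice_set_vf_spoofchk,
	.ndo_get_vf_config	= ice_get_vf_cfg,
	.ndo_set_vf_mac		= ice_set_vf_mac,
	/* remaining callbacks elided */
};
#endif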

/**
 * ice_get_vf_cfg
 * @netdev: network interface device structure
 * @vf_id: VF identifier
 * @ivi: VF configuration structure
 *
 * return VF configuration
 */
int
ice_get_vf_cfg(struct net_device *netdev, int vf_id, struct ifla_vf_info *ivi)
{
	struct ice_pf *pf = ice_netdev_to_pf(netdev);
	struct ice_vf *vf;
	int ret;

	vf = ice_get_vf_by_id(pf, vf_id);
	if (!vf)
		return -EINVAL;

	ret = ice_check_vf_ready_for_cfg(vf);
	if (ret)
		goto out_put_vf;

	ivi->vf = vf_id;
	ether_addr_copy(ivi->mac, vf->hw_lan_addr);

	/* VF configuration for VLAN and applicable QoS */
	ivi->vlan = ice_vf_get_port_vlan_id(vf);
	ivi->qos = ice_vf_get_port_vlan_prio(vf);
	if (ice_vf_is_port_vlan_ena(vf))
		ivi->vlan_proto = cpu_to_be16(ice_vf_get_port_vlan_tpid(vf));

	ivi->trusted = vf->trusted;
	ivi->spoofchk = vf->spoofchk;
	if (!vf->link_forced)
		ivi->linkstate = IFLA_VF_LINK_STATE_AUTO;
	else if (vf->link_up)
		ivi->linkstate = IFLA_VF_LINK_STATE_ENABLE;
	else
		ivi->linkstate = IFLA_VF_LINK_STATE_DISABLE;
	ivi->max_tx_rate = vf->max_tx_rate;
	ivi->min_tx_rate = vf->min_tx_rate;

out_put_vf:
	ice_put_vf(vf);
	return ret;
}
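
/* Editorial note (illustrative): ice_get_vf_cfg() above backs the
 * .ndo_get_vf_config callback, so the ifla_vf_info fields it fills in (MAC,
 * port VLAN and QoS, trust, spoof checking, link state, Tx rates) are what
 * userspace sees in the per-VF lines of "ip link show dev <pf-netdev>".
 */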

/**
 * ice_set_vf_mac
 * @netdev: network interface device structure
 * @vf_id: VF identifier
 * @mac: MAC address
 *
 * program VF MAC address
 */
int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
{
	struct ice_pf *pf = ice_netdev_to_pf(netdev);
	struct ice_vf *vf;
	int ret;

	if (is_multicast_ether_addr(mac)) {
		netdev_err(netdev, "%pM not a valid unicast address\n", mac);
		return -EINVAL;
	}

	vf = ice_get_vf_by_id(pf, vf_id);
	if (!vf)
		return -EINVAL;

	/* nothing left to do, unicast MAC already set */
	if (ether_addr_equal(vf->dev_lan_addr, mac) &&
	    ether_addr_equal(vf->hw_lan_addr, mac)) {
|
|
|
ret = 0;
|
|
|
|
goto out_put_vf;
|
|
|
|
}
|
2020-05-15 17:36:33 -07:00
|
|
|
|
2023-08-11 10:07:01 +02:00
|
|
|
ret = ice_check_vf_ready_for_cfg(vf);
|
2020-02-18 13:22:06 -08:00
|
|
|
if (ret)
|
ice: convert VF storage to hash table with krefs and RCU
The ice driver stores VF structures in a simple array which is allocated
once at the time of VF creation. The VF structures are then accessed
from the array by their VF ID. The ID must be between 0 and the number
of allocated VFs.
Multiple threads can access this table:
* .ndo operations such as .ndo_get_vf_cfg or .ndo_set_vf_trust
* interrupts, such as due to messages from the VF using the virtchnl
communication
* processing such as device reset
* commands to add or remove VFs
The current implementation does not keep track of when all threads are
done operating on a VF and can potentially result in use-after-free
issues caused by one thread accessing a VF structure after it has been
released when removing VFs. Some of these are prevented with various
state flags and checks.
In addition, this structure is quite static and does not support a
planned future where virtualization can be more dynamic. As we begin to
look at supporting Scalable IOV with the ice driver (as opposed to just
supporting Single Root IOV), this structure is not sufficient.
In the future, VFs will be able to be added and removed individually and
dynamically.
To allow for this, and to better protect against a whole class of
use-after-free bugs, replace the VF storage with a combination of a hash
table and krefs to reference track all of the accesses to VFs through
the hash table.
A hash table still allows efficient look up of the VF given its ID, but
also allows adding and removing VFs. It does not require contiguous VF
IDs.
The use of krefs allows the cleanup of the VF memory to be delayed until
after all threads have released their reference (by calling ice_put_vf).
To prevent corruption of the hash table, a combination of RCU and the
mutex table_lock are used. Addition and removal from the hash table use
the RCU-aware hash macros. This allows simple read-only look ups that
iterate to locate a single VF can be fast using RCU. Accesses which
modify the hash table, or which can't take RCU because they sleep, will
hold the mutex lock.
By using this design, we have a stronger guarantee that the VF structure
can't be released until after all threads are finished operating on it.
We also pave the way for the more dynamic Scalable IOV implementation in
the future.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-16 13:37:38 -08:00
|
|
|
goto out_put_vf;
|
2020-02-18 13:22:06 -08:00
|
|
|
|
2021-09-09 14:38:09 -07:00
|
|
|
mutex_lock(&vf->cfg_lock);
|
|
|
|
|
2020-05-15 17:51:17 -07:00
|
|
|
/* VF is notified of its new MAC via the PF's response to the
|
|
|
|
* VIRTCHNL_OP_GET_VF_RESOURCES message after the VF has been reset
|
2018-09-19 17:42:58 -07:00
|
|
|
*/
|
2023-01-18 17:16:53 -08:00
|
|
|
ether_addr_copy(vf->dev_lan_addr, mac);
|
|
|
|
ether_addr_copy(vf->hw_lan_addr, mac);
|
2020-05-15 17:51:17 -07:00
|
|
|
if (is_zero_ether_addr(mac)) {
|
|
|
|
/* VF will send VIRTCHNL_OP_ADD_ETH_ADDR message with its MAC */
|
|
|
|
vf->pf_set_mac = false;
|
|
|
|
netdev_info(netdev, "Removing MAC on VF %d. VF driver will be reinitialized\n",
|
|
|
|
vf->vf_id);
|
|
|
|
} else {
|
|
|
|
/* PF will add MAC rule for the VF */
|
|
|
|
vf->pf_set_mac = true;
|
|
|
|
netdev_info(netdev, "Setting MAC %pM on VF %d. VF driver will be reinitialized\n",
|
|
|
|
mac, vf_id);
|
|
|
|
}
|
2018-09-19 17:42:58 -07:00
|
|
|
|
2022-02-22 16:27:08 -08:00
|
|
|
ice_reset_vf(vf, ICE_VF_RESET_NOTIFY);
|
2021-09-09 14:38:09 -07:00
|
|
|
mutex_unlock(&vf->cfg_lock);
|
ice: convert VF storage to hash table with krefs and RCU
The ice driver stores VF structures in a simple array which is allocated
once at the time of VF creation. The VF structures are then accessed
from the array by their VF ID. The ID must be between 0 and the number
of allocated VFs.
Multiple threads can access this table:
* .ndo operations such as .ndo_get_vf_cfg or .ndo_set_vf_trust
* interrupts, such as due to messages from the VF using the virtchnl
communication
* processing such as device reset
* commands to add or remove VFs
The current implementation does not keep track of when all threads are
done operating on a VF and can potentially result in use-after-free
issues caused by one thread accessing a VF structure after it has been
released when removing VFs. Some of these are prevented with various
state flags and checks.
In addition, this structure is quite static and does not support a
planned future where virtualization can be more dynamic. As we begin to
look at supporting Scalable IOV with the ice driver (as opposed to just
supporting Single Root IOV), this structure is not sufficient.
In the future, VFs will be able to be added and removed individually and
dynamically.
To allow for this, and to better protect against a whole class of
use-after-free bugs, replace the VF storage with a combination of a hash
table and krefs to reference track all of the accesses to VFs through
the hash table.
A hash table still allows efficient look up of the VF given its ID, but
also allows adding and removing VFs. It does not require contiguous VF
IDs.
The use of krefs allows the cleanup of the VF memory to be delayed until
after all threads have released their reference (by calling ice_put_vf).
To prevent corruption of the hash table, a combination of RCU and the
mutex table_lock are used. Addition and removal from the hash table use
the RCU-aware hash macros. This allows simple read-only look ups that
iterate to locate a single VF can be fast using RCU. Accesses which
modify the hash table, or which can't take RCU because they sleep, will
hold the mutex lock.
By using this design, we have a stronger guarantee that the VF structure
can't be released until after all threads are finished operating on it.
We also pave the way for the more dynamic Scalable IOV implementation in
the future.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-16 13:37:38 -08:00
|
|
|
|
|
|
|
out_put_vf:
|
|
|
|
ice_put_vf(vf);
|
|
|
|
return ret;
|
2018-09-19 17:42:58 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
 * ice_set_vf_trust
 * @netdev: network interface device structure
 * @vf_id: VF identifier
 * @trusted: Boolean value to enable/disable trusted VF
 *
 * Enable or disable a given VF as trusted
 */
int ice_set_vf_trust(struct net_device *netdev, int vf_id, bool trusted)
{
        struct ice_pf *pf = ice_netdev_to_pf(netdev);
        struct ice_vf *vf;
        int ret;

        vf = ice_get_vf_by_id(pf, vf_id);
        if (!vf)
                return -EINVAL;

        if (ice_is_eswitch_mode_switchdev(pf)) {
                dev_info(ice_pf_to_dev(pf), "Trusted VF is forbidden in switchdev mode\n");
                /* drop the reference taken by ice_get_vf_by_id() */
                ret = -EOPNOTSUPP;
                goto out_put_vf;
        }

        ret = ice_check_vf_ready_for_cfg(vf);
        if (ret)
                goto out_put_vf;

        /* Check if already trusted */
        if (trusted == vf->trusted) {
                ret = 0;
                goto out_put_vf;
        }

        mutex_lock(&vf->cfg_lock);

        vf->trusted = trusted;
        ice_reset_vf(vf, ICE_VF_RESET_NOTIFY);
        dev_info(ice_pf_to_dev(pf), "VF %u is now %strusted\n",
                 vf_id, trusted ? "" : "un");

        mutex_unlock(&vf->cfg_lock);

out_put_vf:
        ice_put_vf(vf);
        return ret;
}
/**
 * ice_set_vf_link_state
 * @netdev: network interface device structure
 * @vf_id: VF identifier
 * @link_state: required link state
 *
 * Set VF's link state, irrespective of physical link state status
 */
int ice_set_vf_link_state(struct net_device *netdev, int vf_id, int link_state)
{
        struct ice_pf *pf = ice_netdev_to_pf(netdev);
        struct ice_vf *vf;
        int ret;

        vf = ice_get_vf_by_id(pf, vf_id);
        if (!vf)
                return -EINVAL;

        ret = ice_check_vf_ready_for_cfg(vf);
        if (ret)
                goto out_put_vf;

        switch (link_state) {
        case IFLA_VF_LINK_STATE_AUTO:
                vf->link_forced = false;
                break;
        case IFLA_VF_LINK_STATE_ENABLE:
                vf->link_forced = true;
                vf->link_up = true;
                break;
        case IFLA_VF_LINK_STATE_DISABLE:
                vf->link_forced = true;
                vf->link_up = false;
                break;
        default:
                ret = -EINVAL;
                goto out_put_vf;
        }

        ice_vc_notify_vf_link_state(vf);

out_put_vf:
        ice_put_vf(vf);
        return ret;
}
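A short usage note on the switch above: IFLA_VF_LINK_STATE_AUTO lets the VF's reported link track the PF's physical link, while the ENABLE and DISABLE states force the VF to always see link up or link down regardless of the physical state. From the host these typically correspond to "ip link set dev <pf> vf <n> state auto|enable|disable" (iproute2 syntax; exact option spelling may vary by version).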
/**
 * ice_calc_all_vfs_min_tx_rate - calculate cumulative min Tx rate on all VFs
 * @pf: PF associated with VFs
 */
static int ice_calc_all_vfs_min_tx_rate(struct ice_pf *pf)
{
        struct ice_vf *vf;
        unsigned int bkt;
        int rate = 0;

        rcu_read_lock();
        ice_for_each_vf_rcu(pf, bkt, vf)
                rate += vf->min_tx_rate;
        rcu_read_unlock();

        return rate;
}
/**
 * ice_min_tx_rate_oversubscribed - check if min Tx rate causes oversubscription
 * @vf: VF trying to configure min_tx_rate
 * @min_tx_rate: min Tx rate in Mbps
 *
 * Check if the min_tx_rate being passed in will cause oversubscription of total
 * min_tx_rate based on the current link speed and all other VFs configured
 * min_tx_rate
 *
 * Return true if the passed min_tx_rate would cause oversubscription, else
 * return false
 */
static bool
ice_min_tx_rate_oversubscribed(struct ice_vf *vf, int min_tx_rate)
{
        struct ice_vsi *vsi = ice_get_vf_vsi(vf);
        int all_vfs_min_tx_rate;
        int link_speed_mbps;

        if (WARN_ON(!vsi))
                return false;

        link_speed_mbps = ice_get_link_speed_mbps(vsi);
        all_vfs_min_tx_rate = ice_calc_all_vfs_min_tx_rate(vf->pf);

        /* this VF's previous rate is being overwritten */
        all_vfs_min_tx_rate -= vf->min_tx_rate;

        if (all_vfs_min_tx_rate + min_tx_rate > link_speed_mbps) {
                dev_err(ice_pf_to_dev(vf->pf), "min_tx_rate of %d Mbps on VF %u would cause oversubscription of %d Mbps based on the current link speed %d Mbps\n",
                        min_tx_rate, vf->vf_id,
                        all_vfs_min_tx_rate + min_tx_rate - link_speed_mbps,
                        link_speed_mbps);
                return true;
        }

        return false;
}
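A worked example of the check above, with illustrative numbers only: on a 10000 Mbps link where the other VFs already have a combined min_tx_rate of 9000 Mbps, requesting min_tx_rate = 2000 Mbps for this VF gives 9000 + 2000 = 11000 Mbps, so the function logs a 1000 Mbps oversubscription and returns true; a request of 1000 Mbps or less would be accepted. Because the VF's own previous guarantee is subtracted first, replacing an existing guarantee is not double-counted.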
/**
 * ice_set_vf_bw - set min/max VF bandwidth
 * @netdev: network interface device structure
 * @vf_id: VF identifier
 * @min_tx_rate: Minimum Tx rate in Mbps
 * @max_tx_rate: Maximum Tx rate in Mbps
 */
int
ice_set_vf_bw(struct net_device *netdev, int vf_id, int min_tx_rate,
              int max_tx_rate)
{
        struct ice_pf *pf = ice_netdev_to_pf(netdev);
        struct ice_vsi *vsi;
        struct device *dev;
        struct ice_vf *vf;
        int ret;

        dev = ice_pf_to_dev(pf);

        vf = ice_get_vf_by_id(pf, vf_id);
        if (!vf)
                return -EINVAL;

        ret = ice_check_vf_ready_for_cfg(vf);
        if (ret)
                goto out_put_vf;

        vsi = ice_get_vf_vsi(vf);
        if (!vsi) {
                ret = -EINVAL;
                goto out_put_vf;
        }

        if (min_tx_rate && ice_is_dcb_active(pf)) {
                dev_err(dev, "DCB on PF is currently enabled. VF min Tx rate limiting not allowed on this PF.\n");
                ret = -EOPNOTSUPP;
                goto out_put_vf;
        }

        if (ice_min_tx_rate_oversubscribed(vf, min_tx_rate)) {
                ret = -EINVAL;
                goto out_put_vf;
        }

        if (vf->min_tx_rate != (unsigned int)min_tx_rate) {
                ret = ice_set_min_bw_limit(vsi, (u64)min_tx_rate * 1000);
                if (ret) {
                        dev_err(dev, "Unable to set min-tx-rate for VF %d\n",
                                vf->vf_id);
                        goto out_put_vf;
                }

                vf->min_tx_rate = min_tx_rate;
        }

        if (vf->max_tx_rate != (unsigned int)max_tx_rate) {
                ret = ice_set_max_bw_limit(vsi, (u64)max_tx_rate * 1000);
                if (ret) {
                        dev_err(dev, "Unable to set max-tx-rate for VF %d\n",
                                vf->vf_id);
                        goto out_put_vf;
                }

                vf->max_tx_rate = max_tx_rate;
        }

out_put_vf:
        ice_put_vf(vf);
        return ret;
}
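A note on units in ice_set_vf_bw() above: the rates arrive in Mbps through the .ndo_set_vf_rate callback, while the multiplication by 1000 converts them for ice_set_min_bw_limit()/ice_set_max_bw_limit(), which program the scheduler in Kbps (a requested 100 Mbps becomes 100000 Kbps). The sketch below shows a direct caller for illustration; example_cap_vf0() is a hypothetical helper, and in practice this path is reached through the rtnetlink VF configuration code rather than called directly.

/* Hypothetical helper (not driver code): cap VF 0 on this PF to 100 Mbps
 * with no minimum guarantee. The same effect is normally achieved from
 * userspace, e.g. "ip link set dev <pf> vf 0 max_tx_rate 100".
 */
static int example_cap_vf0(struct net_device *pf_netdev)
{
        return ice_set_vf_bw(pf_netdev, 0 /* vf_id */, 0 /* min Mbps */,
                             100 /* max Mbps */);
}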
/**
|
|
|
|
* ice_get_vf_stats - populate some stats for the VF
|
|
|
|
* @netdev: the netdev of the PF
|
|
|
|
* @vf_id: the host OS identifier (0-255)
|
|
|
|
* @vf_stats: pointer to the OS memory to be initialized
|
|
|
|
*/
|
|
|
|
int ice_get_vf_stats(struct net_device *netdev, int vf_id,
|
|
|
|
struct ifla_vf_stats *vf_stats)
|
|
|
|
{
|
|
|
|
struct ice_pf *pf = ice_netdev_to_pf(netdev);
|
|
|
|
struct ice_eth_stats *stats;
|
|
|
|
struct ice_vsi *vsi;
|
|
|
|
struct ice_vf *vf;
|
2020-02-18 13:22:06 -08:00
|
|
|
int ret;
|
2019-11-08 06:23:28 -08:00
|
|
|
|
2022-02-16 13:37:37 -08:00
|
|
|
vf = ice_get_vf_by_id(pf, vf_id);
|
|
|
|
if (!vf)
|
2019-11-08 06:23:28 -08:00
|
|
|
return -EINVAL;
|
|
|
|
|
2020-02-18 13:22:06 -08:00
|
|
|
ret = ice_check_vf_ready_for_cfg(vf);
|
|
|
|
if (ret)
|
ice: convert VF storage to hash table with krefs and RCU
The ice driver stores VF structures in a simple array which is allocated
once at the time of VF creation. The VF structures are then accessed
from the array by their VF ID. The ID must be between 0 and the number
of allocated VFs.
Multiple threads can access this table:
* .ndo operations such as .ndo_get_vf_cfg or .ndo_set_vf_trust
* interrupts, such as due to messages from the VF using the virtchnl
communication
* processing such as device reset
* commands to add or remove VFs
The current implementation does not keep track of when all threads are
done operating on a VF and can potentially result in use-after-free
issues caused by one thread accessing a VF structure after it has been
released when removing VFs. Some of these are prevented with various
state flags and checks.
In addition, this structure is quite static and does not support a
planned future where virtualization can be more dynamic. As we begin to
look at supporting Scalable IOV with the ice driver (as opposed to just
supporting Single Root IOV), this structure is not sufficient.
In the future, VFs will be able to be added and removed individually and
dynamically.
To allow for this, and to better protect against a whole class of
use-after-free bugs, replace the VF storage with a combination of a hash
table and krefs to reference track all of the accesses to VFs through
the hash table.
A hash table still allows efficient look up of the VF given its ID, but
also allows adding and removing VFs. It does not require contiguous VF
IDs.
The use of krefs allows the cleanup of the VF memory to be delayed until
after all threads have released their reference (by calling ice_put_vf).
To prevent corruption of the hash table, a combination of RCU and the
mutex table_lock are used. Addition and removal from the hash table use
the RCU-aware hash macros. This allows simple read-only look ups that
iterate to locate a single VF can be fast using RCU. Accesses which
modify the hash table, or which can't take RCU because they sleep, will
hold the mutex lock.
By using this design, we have a stronger guarantee that the VF structure
can't be released until after all threads are finished operating on it.
We also pave the way for the more dynamic Scalable IOV implementation in
the future.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-16 13:37:38 -08:00
|
|
|
goto out_put_vf;
|
2019-11-08 06:23:28 -08:00
|
|
|
|
2021-03-02 10:15:39 -08:00
|
|
|
vsi = ice_get_vf_vsi(vf);
|
ice: convert VF storage to hash table with krefs and RCU
The ice driver stores VF structures in a simple array which is allocated
once at the time of VF creation. The VF structures are then accessed
from the array by their VF ID. The ID must be between 0 and the number
of allocated VFs.
Multiple threads can access this table:
* .ndo operations such as .ndo_get_vf_cfg or .ndo_set_vf_trust
* interrupts, such as due to messages from the VF using the virtchnl
communication
* processing such as device reset
* commands to add or remove VFs
The current implementation does not keep track of when all threads are
done operating on a VF and can potentially result in use-after-free
issues caused by one thread accessing a VF structure after it has been
released when removing VFs. Some of these are prevented with various
state flags and checks.
In addition, this structure is quite static and does not support a
planned future where virtualization can be more dynamic. As we begin to
look at supporting Scalable IOV with the ice driver (as opposed to just
supporting Single Root IOV), this structure is not sufficient.
In the future, VFs will be able to be added and removed individually and
dynamically.
To allow for this, and to better protect against a whole class of
use-after-free bugs, replace the VF storage with a combination of a hash
table and krefs to reference track all of the accesses to VFs through
the hash table.
A hash table still allows efficient look up of the VF given its ID, but
also allows adding and removing VFs. It does not require contiguous VF
IDs.
The use of krefs allows the cleanup of the VF memory to be delayed until
after all threads have released their reference (by calling ice_put_vf).
To prevent corruption of the hash table, a combination of RCU and the
mutex table_lock are used. Addition and removal from the hash table use
the RCU-aware hash macros. This allows simple read-only look ups that
iterate to locate a single VF can be fast using RCU. Accesses which
modify the hash table, or which can't take RCU because they sleep, will
hold the mutex lock.
By using this design, we have a stronger guarantee that the VF structure
can't be released until after all threads are finished operating on it.
We also pave the way for the more dynamic Scalable IOV implementation in
the future.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-16 13:37:38 -08:00
|
|
|
	if (!vsi) {
		ret = -EINVAL;
		goto out_put_vf;
	}

	ice_update_eth_stats(vsi);
	stats = &vsi->eth_stats;

	memset(vf_stats, 0, sizeof(*vf_stats));

	vf_stats->rx_packets = stats->rx_unicast + stats->rx_broadcast +
			       stats->rx_multicast;
	vf_stats->tx_packets = stats->tx_unicast + stats->tx_broadcast +
			       stats->tx_multicast;
	vf_stats->rx_bytes = stats->rx_bytes;
	vf_stats->tx_bytes = stats->tx_bytes;
	vf_stats->broadcast = stats->rx_broadcast;
	vf_stats->multicast = stats->rx_multicast;
	vf_stats->rx_dropped = stats->rx_discards;
	vf_stats->tx_dropped = stats->tx_discards;

out_put_vf:
	ice_put_vf(vf);
	return ret;
}

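/* A non-normative usage sketch: the counters filled in above are the per-VF
 * statistics that user space typically reads via iproute2 once the enclosing
 * getter is wired up as the .ndo_get_vf_stats callback (an assumption based
 * on the vf_stats fields, not stated in this file). With a hypothetical PF
 * netdev name:
 *
 *	ip -s link show dev enp1s0f0
 *
 * Note that unicast, broadcast, and multicast frames are folded into the
 * rx_packets/tx_packets totals, while rx_dropped/tx_dropped come from the
 * VSI discard counters.
 */
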
/**
 * ice_is_supported_port_vlan_proto - make sure the vlan_proto is supported
 * @hw: hardware structure used to check the VLAN mode
 * @vlan_proto: VLAN TPID being checked
 *
 * If the device is configured in Double VLAN Mode (DVM), then both ETH_P_8021Q
 * and ETH_P_8021AD are supported. If the device is configured in Single VLAN
 * Mode (SVM), then only ETH_P_8021Q is supported.
 */
static bool
ice_is_supported_port_vlan_proto(struct ice_hw *hw, u16 vlan_proto)
{
	bool is_supported = false;

	switch (vlan_proto) {
	case ETH_P_8021Q:
		is_supported = true;
		break;
	case ETH_P_8021AD:
		if (ice_is_dvm_ena(hw))
			is_supported = true;
		break;
	}

	return is_supported;
}

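/* A minimal sketch of the resulting policy (the TPID values are the standard
 * IEEE ones, not taken from this file):
 *
 *	ice_is_supported_port_vlan_proto(hw, ETH_P_8021Q);  // 0x8100: always true
 *	ice_is_supported_port_vlan_proto(hw, ETH_P_8021AD); // 0x88A8: true only in DVM
 *
 * Any other TPID falls through the switch and is rejected by the caller.
 */
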
/**
 * ice_set_vf_port_vlan - program VF port VLAN ID and/or QoS
 * @netdev: network interface device structure
 * @vf_id: VF identifier
 * @vlan_id: VLAN ID being set
 * @qos: priority setting
 * @vlan_proto: VLAN protocol
 *
 * Program the port VLAN ID, QoS, and TPID for the given VF.
 *
 * Return: 0 on success, negative error code on failure.
 */
int
ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos,
		     __be16 vlan_proto)
{
	struct ice_pf *pf = ice_netdev_to_pf(netdev);
	u16 local_vlan_proto = ntohs(vlan_proto);
	struct device *dev;
	struct ice_vf *vf;
	int ret;

	dev = ice_pf_to_dev(pf);

	if (vlan_id >= VLAN_N_VID || qos > 7) {
		dev_err(dev, "Invalid Port VLAN parameters for VF %d, ID %d, QoS %d\n",
			vf_id, vlan_id, qos);
		return -EINVAL;
	}

	if (!ice_is_supported_port_vlan_proto(&pf->hw, local_vlan_proto)) {
		dev_err(dev, "VF VLAN protocol 0x%04x is not supported\n",
			local_vlan_proto);
		return -EPROTONOSUPPORT;
	}

	vf = ice_get_vf_by_id(pf, vf_id);
	if (!vf)
		return -EINVAL;

	ret = ice_check_vf_ready_for_cfg(vf);
	if (ret)
		goto out_put_vf;

	if (ice_vf_get_port_vlan_prio(vf) == qos &&
	    ice_vf_get_port_vlan_tpid(vf) == local_vlan_proto &&
	    ice_vf_get_port_vlan_id(vf) == vlan_id) {
		/* duplicate request, so just return success */
		dev_dbg(dev, "Duplicate port VLAN %u, QoS %u, TPID 0x%04x request\n",
			vlan_id, qos, local_vlan_proto);
		ret = 0;
		goto out_put_vf;
	}

	mutex_lock(&vf->cfg_lock);

	vf->port_vlan_info = ICE_VLAN(local_vlan_proto, vlan_id, qos);
	if (ice_vf_is_port_vlan_ena(vf))
		dev_info(dev, "Setting VLAN %u, QoS %u, TPID 0x%04x on VF %d\n",
			 vlan_id, qos, local_vlan_proto, vf_id);
	else
		dev_info(dev, "Clearing port VLAN on VF %d\n", vf_id);

	ice_reset_vf(vf, ICE_VF_RESET_NOTIFY);
	mutex_unlock(&vf->cfg_lock);

out_put_vf:
	ice_put_vf(vf);
	return ret;
}

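/* A usage sketch, assuming this function is reached through the
 * .ndo_set_vf_vlan path from iproute2 (the PF netdev name below is
 * hypothetical):
 *
 *	# 802.1Q port VLAN 100, priority 3, on VF 0
 *	ip link set enp1s0f0 vf 0 vlan 100 qos 3
 *
 *	# 802.1ad (S-tag) port VLAN, only accepted when DVM is enabled
 *	ip link set enp1s0f0 vf 0 vlan 100 qos 3 proto 802.1ad
 *
 *	# clear the port VLAN
 *	ip link set enp1s0f0 vf 0 vlan 0
 *
 * The VF is then reset with ICE_VF_RESET_NOTIFY so the new port VLAN
 * configuration takes effect.
 */
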
/**
 * ice_print_vf_rx_mdd_event - print VF Rx malicious driver detect event
 * @vf: pointer to the VF structure
 */
void ice_print_vf_rx_mdd_event(struct ice_vf *vf)
{
	struct ice_pf *pf = vf->pf;
	struct device *dev;

	dev = ice_pf_to_dev(pf);

	dev_info(dev, "%d Rx Malicious Driver Detection events detected on PF %d VF %d MAC %pM. mdd-auto-reset-vfs=%s\n",
		 vf->mdd_rx_events.count, pf->hw.pf_id, vf->vf_id,
		 vf->dev_lan_addr,
		 test_bit(ICE_FLAG_MDD_AUTO_RESET_VF, pf->flags)
			  ? "on" : "off");
}

/**
 * ice_print_vf_tx_mdd_event - print VF Tx malicious driver detect event
 * @vf: pointer to the VF structure
 */
void ice_print_vf_tx_mdd_event(struct ice_vf *vf)
{
	struct ice_pf *pf = vf->pf;
	struct device *dev;

	dev = ice_pf_to_dev(pf);

	dev_info(dev, "%d Tx Malicious Driver Detection events detected on PF %d VF %d MAC %pM. mdd-auto-reset-vfs=%s\n",
		 vf->mdd_tx_events.count, pf->hw.pf_id, vf->vf_id,
		 vf->dev_lan_addr,
		 test_bit(ICE_FLAG_MDD_AUTO_RESET_VF, pf->flags)
			  ? "on" : "off");
}

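/* For reference, the two helpers above render kernel log lines of the
 * following shape (values here are purely illustrative, produced from the
 * format strings above):
 *
 *	2 Tx Malicious Driver Detection events detected on PF 0 VF 3
 *	MAC aa:bb:cc:dd:ee:ff. mdd-auto-reset-vfs=on
 *
 * The trailing on/off reflects whether the ICE_FLAG_MDD_AUTO_RESET_VF
 * flag is set on the PF.
 */
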
/**
 * ice_print_vfs_mdd_events - print VFs malicious driver detect event
 * @pf: pointer to the PF structure
 *
 * Called from ice_handle_mdd_event to rate limit and print VFs MDD events.
 */
void ice_print_vfs_mdd_events(struct ice_pf *pf)
{
	struct ice_vf *vf;
	unsigned int bkt;

	/* check that there are pending MDD events to print */
	if (!test_and_clear_bit(ICE_MDD_VF_PRINT_PENDING, pf->state))
		return;

	/* VF MDD event logs are rate limited to one second intervals */
	if (time_is_after_jiffies(pf->vfs.last_printed_mdd_jiffies + HZ * 1))
		return;

	pf->vfs.last_printed_mdd_jiffies = jiffies;

	mutex_lock(&pf->vfs.table_lock);
	ice_for_each_vf(pf, bkt, vf) {
		/* only print Rx MDD event message if there are new events */
		if (vf->mdd_rx_events.count != vf->mdd_rx_events.last_printed) {
			vf->mdd_rx_events.last_printed =
							vf->mdd_rx_events.count;
			ice_print_vf_rx_mdd_event(vf);
		}

		/* only print Tx MDD event message if there are new events */
		if (vf->mdd_tx_events.count != vf->mdd_tx_events.last_printed) {
			vf->mdd_tx_events.last_printed =
							vf->mdd_tx_events.count;
			ice_print_vf_tx_mdd_event(vf);
		}
	}
	mutex_unlock(&pf->vfs.table_lock);
}

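/* A minimal sketch of the two VF-table access patterns used in this file
 * (names are taken from the code above; the error handling is illustrative):
 *
 *	// whole-table walk: hold the table mutex for the duration
 *	mutex_lock(&pf->vfs.table_lock);
 *	ice_for_each_vf(pf, bkt, vf)
 *		...;
 *	mutex_unlock(&pf->vfs.table_lock);
 *
 *	// single-VF access: take a reference, drop it when done
 *	vf = ice_get_vf_by_id(pf, vf_id);
 *	if (!vf)
 *		return -EINVAL;
 *	...
 *	ice_put_vf(vf);
 */
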
/**
 * ice_restore_all_vfs_msi_state - restore VF MSI state after PF FLR
 * @pf: pointer to the PF structure
 *
 * Called when recovering from a PF FLR to restore interrupt capability to
 * the VFs.
 */
void ice_restore_all_vfs_msi_state(struct ice_pf *pf)
{
	struct ice_vf *vf;
	u32 bkt;

	ice_for_each_vf(pf, bkt, vf)
		pci_restore_msi_state(vf->vfdev);
}
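
/* Context note (an interpretation of the helper above, not an addition to
 * it): after a PF-level Function Level Reset the VFs' MSI/MSI-X
 * configuration needs to be restored, so each VF's saved state is written
 * back with the standard PCI helper:
 *
 *	pci_restore_msi_state(vf->vfdev);
 *
 * which re-programs the MSI/MSI-X capability registers from the state the
 * PCI core previously saved for that device.
 */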