/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2018, Intel Corporation. */

#ifndef _ICE_COMMON_H_
#define _ICE_COMMON_H_

#include "ice.h"
#include "ice_type.h"
#include "ice_nvm.h"
#include "ice_flex_pipe.h"
#include "ice_switch.h"
#include <linux/avf/virtchnl.h>

#define ICE_SQ_SEND_DELAY_TIME_MS 10
#define ICE_SQ_SEND_MAX_EXECUTE 3

enum ice_status ice_init_hw(struct ice_hw *hw);
void ice_deinit_hw(struct ice_hw *hw);
enum ice_status ice_check_reset(struct ice_hw *hw);
enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
enum ice_status ice_create_all_ctrlq(struct ice_hw *hw);
enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
void ice_shutdown_all_ctrlq(struct ice_hw *hw);
void ice_destroy_all_ctrlq(struct ice_hw *hw);
enum ice_status
ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
		  struct ice_rq_event_info *e, u16 *pending);
enum ice_status
ice_get_link_status(struct ice_port_info *pi, bool *link_up);
enum ice_status ice_update_link_info(struct ice_port_info *pi);
enum ice_status
ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
		enum ice_aq_res_access_type access, u32 timeout);
void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res);
enum ice_status
ice_alloc_hw_res(struct ice_hw *hw, u16 type, u16 num, bool btm, u16 *res);
enum ice_status
ice_free_hw_res(struct ice_hw *hw, u16 type, u16 num, u16 *res);
enum ice_status
ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
		      enum ice_adminq_opc opc, struct ice_sq_cd *cd);
bool ice_is_sbq_supported(struct ice_hw *hw);
struct ice_ctl_q_info *ice_get_sbq(struct ice_hw *hw);
enum ice_status
ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
		struct ice_aq_desc *desc, void *buf, u16 buf_size,
		struct ice_sq_cd *cd);
void ice_clear_pxe_mode(struct ice_hw *hw);
|
ice: Get switch config, scheduler config and device capabilities
This patch adds to the initialization flow by getting switch
configuration, scheduler configuration and device capabilities.
Switch configuration:
On boot, an L2 switch element is created in the firmware per physical
function. Each physical function is also mapped to a port, to which its
switch element is connected. In other words, this switch can be visualized
as an embedded vSwitch that can connect a physical function's virtual
station interfaces (VSIs) to the egress/ingress port. Egress/ingress
filters will be eventually created and applied on this switch element.
As part of the initialization flow, the driver gets configuration data
from this switch element and stores it.
Scheduler configuration:
The Tx scheduler is a subsystem responsible for setting and enforcing QoS.
As part of the initialization flow, the driver queries and stores the
default scheduler configuration for the given physical function.
Device capabilities:
As part of initialization, the driver has to determine what the device is
capable of (ex. max queues, VSIs, etc). This information is obtained from
the firmware and stored by the driver.
CC: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-03-20 07:58:08 -07:00
|
|
|
enum ice_status ice_get_caps(struct ice_hw *hw);
|
2018-10-18 08:37:07 -07:00
|
|
|
|
2019-09-09 06:47:46 -07:00
|
|
|
void ice_set_safe_mode_caps(struct ice_hw *hw);
|
|
|
|
|
2018-03-20 07:58:13 -07:00
|
|
|
enum ice_status
|
|
|
|
ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
|
|
|
|
u32 rxq_index);

enum ice_status
ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_params);
enum ice_status
ice_aq_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *set_params);
enum ice_status
ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
		   struct ice_aqc_get_set_rss_keys *keys);
enum ice_status
ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
		   struct ice_aqc_get_set_rss_keys *keys);

bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
extern const struct ice_ctx_ele ice_tlan_ctx_info[];
enum ice_status
ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx,
	    const struct ice_ctx_ele *ce_info);

extern struct mutex ice_global_cfg_lock_sw;

enum ice_status
ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc,
		void *buf, u16 buf_size, struct ice_sq_cd *cd);
enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd);

enum ice_status
ice_aq_send_driver_ver(struct ice_hw *hw, struct ice_driver_ver *dv,
		       struct ice_sq_cd *cd);
enum ice_status
ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
		    struct ice_aqc_get_phy_caps_data *caps,
		    struct ice_sq_cd *cd);
enum ice_status
ice_aq_list_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count,
		 enum ice_adminq_opc opc, struct ice_sq_cd *cd);
enum ice_status
ice_discover_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_caps);
void
ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
		    u16 link_speeds_bitmap);
enum ice_status
ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
			struct ice_sq_cd *cd);
enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
enum ice_status
ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
bool ice_fw_supports_link_override(struct ice_hw *hw);
enum ice_status
ice_get_link_default_override(struct ice_link_default_override_tlv *ldo,
			      struct ice_port_info *pi);
bool ice_is_phy_caps_an_enabled(struct ice_aqc_get_phy_caps_data *caps);

enum ice_fc_mode ice_caps_to_fc_mode(u8 caps);
enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options);
enum ice_status
ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
	   bool ena_auto_link_update);
enum ice_status
ice_cfg_phy_fc(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
	       enum ice_fc_mode fc);
bool
ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *caps,
			struct ice_aqc_set_phy_cfg_data *cfg);
void
ice_copy_phy_caps_to_cfg(struct ice_port_info *pi,
			 struct ice_aqc_get_phy_caps_data *caps,
			 struct ice_aqc_set_phy_cfg_data *cfg);
enum ice_status
ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
		enum ice_fec_mode fec);
enum ice_status
ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
			   struct ice_sq_cd *cd);
enum ice_status
ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd);
enum ice_status
ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
		     struct ice_link_status *link, struct ice_sq_cd *cd);
enum ice_status
ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
		      struct ice_sq_cd *cd);
enum ice_status
ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);

enum ice_status
ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
		       struct ice_sq_cd *cd);
enum ice_status
ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
		  u16 mem_addr, u8 page, u8 set_page, u8 *data, u8 length,
		  bool write, struct ice_sq_cd *cd);

int
ice_cfg_vsi_rdma(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap,
		 u16 *max_rdmaqs);
int
ice_ena_vsi_rdma_qset(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
		      u16 *rdma_qset, u16 num_qsets, u32 *qset_teid);
int
ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid,
		      u16 *q_id);
enum ice_status
ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
		u16 *q_handle, u16 *q_ids, u32 *q_teids,
		enum ice_disq_rst_src rst_src, u16 vmvf_num,
		struct ice_sq_cd *cd);
enum ice_status
ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
		u16 *max_lanqs);
enum ice_status
ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle,
		u8 num_qgrps, struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
		struct ice_sq_cd *cd);
enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
void ice_replay_post(struct ice_hw *hw);
void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
struct ice_q_ctx *
ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle);
int ice_sbq_rw_reg(struct ice_hw *hw, struct ice_sbq_msg_input *in);
void
ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
		  u64 *prev_stat, u64 *cur_stat);
void
ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
		  u64 *prev_stat, u64 *cur_stat);
enum ice_status
ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
		     struct ice_aqc_txsched_elem_data *buf);
enum ice_status
ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size,
		    struct ice_sq_cd *cd);
bool ice_fw_supports_lldp_fltr_ctrl(struct ice_hw *hw);
enum ice_status
ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add);
bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw);
#endif /* _ICE_COMMON_H_ */