License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where the license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier to apply to a
file was done in a spreadsheet of side-by-side results of the output of
two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                            # files
---------------------------------------------------|-------
GPL-2.0                                              11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:
SPDX license identifier                            # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                        930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                             # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                        270
GPL-2.0+ WITH Linux-syscall-note                       169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)     21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)     17
LGPL-2.1+ WITH Linux-syscall-note                       15
GPL-1.0+ WITH Linux-syscall-note                        14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)     5
LGPL-2.0+ WITH Linux-syscall-note                        4
LGPL-2.1 WITH Linux-syscall-note                         3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)               3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)              1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe, and Thomas to determine the SPDX license
identifiers to apply to the source files, in some cases with
confirmation by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
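The script itself is not reproduced here; a minimal, hypothetical sketch of the comment-style selection it describes (function names and CSV layout are assumptions for illustration, not the actual script) might look like:

```python
import csv
import io

# Hypothetical sketch: choose the SPDX comment style per file type, as
# described above -- headers keep C-style /* */ comments while .c source
# files use //. Other file kinds (Makefiles, Kconfig, scripts) take
# shell-style comments. This is an illustration, not the real tool.
def spdx_line(path, license_id):
    if path.endswith(".c"):
        return f"// SPDX-License-Identifier: {license_id}"
    if path.endswith((".h", ".S")):
        return f"/* SPDX-License-Identifier: {license_id} */"
    return f"# SPDX-License-Identifier: {license_id}"

def tag_from_csv(csv_text):
    """Yield (path, SPDX tag line) pairs from 'file,license' CSV rows."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        yield row["file"], spdx_line(row["file"], row["license"])

rows = "file,license\ndrivers/s390/crypto/ap_queue.c,GPL-2.0\n"
for path, line in tag_from_csv(rows):
    print(path, line)
```

The per-extension split matters because a `//` comment as the first line of a header shared with assembler code would not survive preprocessing by all consumers, while `/* */` is safe everywhere.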
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright IBM Corp. 2016, 2023
 * Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
 *
 * Adjunct processor bus, queue related code.
 */

#define KMSG_COMPONENT "ap"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt

#include <linux/export.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <asm/facility.h>

#include "ap_bus.h"
#include "ap_debug.h"

static void __ap_flush_queue(struct ap_queue *aq);

/*
 * some AP queue helper functions
 */

static inline bool ap_q_supported_in_se(struct ap_queue *aq)
{
        return aq->card->hwinfo.ep11 || aq->card->hwinfo.accel;
}

static inline bool ap_q_supports_bind(struct ap_queue *aq)
{
        return aq->card->hwinfo.ep11 || aq->card->hwinfo.accel;
}

static inline bool ap_q_supports_assoc(struct ap_queue *aq)
{
        return aq->card->hwinfo.ep11;
}

static inline bool ap_q_needs_bind(struct ap_queue *aq)
{
        return ap_q_supports_bind(aq) && ap_sb_available();
}

/**
 * ap_queue_enable_irq(): Enable interrupt support on this AP queue.
 * @aq: The AP queue
 * @ind: the notification indicator byte
 *
 * Enables interruption on AP queue via ap_aqic(). Based on the return
 * value it waits a while and tests the AP queue if interrupts
 * have been switched on using ap_test_queue().
 */
static int ap_queue_enable_irq(struct ap_queue *aq, void *ind)
{
        union ap_qirq_ctrl qirqctrl = { .value = 0 };
        struct ap_queue_status status;

        qirqctrl.ir = 1;
        qirqctrl.isc = AP_ISC;
        status = ap_aqic(aq->qid, qirqctrl, virt_to_phys(ind));
        if (status.async)
                return -EPERM;
        switch (status.response_code) {
        case AP_RESPONSE_NORMAL:
        case AP_RESPONSE_OTHERWISE_CHANGED:
                return 0;
        case AP_RESPONSE_Q_NOT_AVAIL:
        case AP_RESPONSE_DECONFIGURED:
        case AP_RESPONSE_CHECKSTOPPED:
        case AP_RESPONSE_INVALID_ADDRESS:
                pr_err("Registering adapter interrupts for AP device %02x.%04x failed\n",
                       AP_QID_CARD(aq->qid),
                       AP_QID_QUEUE(aq->qid));
                return -EOPNOTSUPP;
        case AP_RESPONSE_RESET_IN_PROGRESS:
        case AP_RESPONSE_BUSY:
        default:
                return -EBUSY;
        }
}

/**
 * __ap_send(): Send message to adjunct processor queue.
 * @qid: The AP queue number
 * @psmid: The program supplied message identifier
 * @msg: The message text
 * @msglen: The message length
 * @special: Special Bit
 *
 * Returns AP queue status structure.
 * Condition code 1 on NQAP can't happen because the L bit is 1.
 * Condition code 2 on NQAP also means the send is incomplete,
 * because a segment boundary was reached. The NQAP is repeated.
 */
static inline struct ap_queue_status
__ap_send(ap_qid_t qid, unsigned long psmid, void *msg, size_t msglen,
          int special)
{
        if (special)
                qid |= 0x400000UL;
        return ap_nqap(qid, psmid, msg, msglen);
}

/* State machine definitions and helpers */

static enum ap_sm_wait ap_sm_nop(struct ap_queue *aq)
{
        return AP_SM_WAIT_NONE;
}

/**
 * ap_sm_recv(): Receive pending reply messages from an AP queue but do
 *      not change the state of the device.
 * @aq: pointer to the AP queue
 *
 * Returns AP_SM_WAIT_NONE, AP_SM_WAIT_AGAIN, or AP_SM_WAIT_INTERRUPT
 */
static struct ap_queue_status ap_sm_recv(struct ap_queue *aq)
{
        struct ap_queue_status status;
        struct ap_message *ap_msg;
        bool found = false;
        size_t reslen;
        unsigned long resgr0 = 0;
        int parts = 0;

        /*
         * DQAP loop until response code and resgr0 indicate that
         * the msg is totally received. As we use the very same buffer
         * the msg is overwritten with each invocation. That's intended
         * and the receiver of the msg is informed with a msg rc code
         * of EMSGSIZE in such a case.
         */
        do {
                status = ap_dqap(aq->qid, &aq->reply->psmid,
                                 aq->reply->msg, aq->reply->bufsize,
                                 &aq->reply->len, &reslen, &resgr0);
                parts++;
        } while (status.response_code == 0xFF && resgr0 != 0);

        switch (status.response_code) {
        case AP_RESPONSE_NORMAL:
                print_hex_dump_debug("aprpl: ", DUMP_PREFIX_ADDRESS, 16, 1,
                                     aq->reply->msg, aq->reply->len, false);
                aq->queue_count = max_t(int, 0, aq->queue_count - 1);
                if (!status.queue_empty && !aq->queue_count)
                        aq->queue_count++;
                if (aq->queue_count > 0)
                        mod_timer(&aq->timeout,
                                  jiffies + aq->request_timeout);
                list_for_each_entry(ap_msg, &aq->pendingq, list) {
                        if (ap_msg->psmid != aq->reply->psmid)
                                continue;
                        list_del_init(&ap_msg->list);
                        aq->pendingq_count--;
                        if (parts > 1) {
                                ap_msg->rc = -EMSGSIZE;
                                ap_msg->receive(aq, ap_msg, NULL);
                        } else {
                                ap_msg->receive(aq, ap_msg, aq->reply);
                        }
                        found = true;
                        break;
                }
                if (!found) {
                        AP_DBF_WARN("%s unassociated reply psmid=0x%016lx on 0x%02x.%04x\n",
                                    __func__, aq->reply->psmid,
                                    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
                }
                fallthrough;
        case AP_RESPONSE_NO_PENDING_REPLY:
                if (!status.queue_empty || aq->queue_count <= 0)
                        break;
                /* The card shouldn't forget requests but who knows. */
                aq->queue_count = 0;
                list_splice_init(&aq->pendingq, &aq->requestq);
                aq->requestq_count += aq->pendingq_count;
                pr_debug("queue 0x%02x.%04x rescheduled %d reqs (new req %d)\n",
                         AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid),
                         aq->pendingq_count, aq->requestq_count);
                aq->pendingq_count = 0;
                break;
        default:
                break;
        }
        return status;
}

/**
 * ap_sm_read(): Receive pending reply messages from an AP queue.
 * @aq: pointer to the AP queue
 *
 * Returns AP_SM_WAIT_NONE, AP_SM_WAIT_AGAIN, or AP_SM_WAIT_INTERRUPT
 */
static enum ap_sm_wait ap_sm_read(struct ap_queue *aq)
{
        struct ap_queue_status status;

        if (!aq->reply)
                return AP_SM_WAIT_NONE;
        status = ap_sm_recv(aq);
        if (status.async)
                return AP_SM_WAIT_NONE;
        switch (status.response_code) {
        case AP_RESPONSE_NORMAL:
                if (aq->queue_count > 0) {
                        aq->sm_state = AP_SM_STATE_WORKING;
                        return AP_SM_WAIT_AGAIN;
                }
                aq->sm_state = AP_SM_STATE_IDLE;
                break;
        case AP_RESPONSE_NO_PENDING_REPLY:
                if (aq->queue_count > 0)
                        return status.irq_enabled ?
                                AP_SM_WAIT_INTERRUPT : AP_SM_WAIT_HIGH_TIMEOUT;
                aq->sm_state = AP_SM_STATE_IDLE;
                break;
        default:
                aq->dev_state = AP_DEV_STATE_ERROR;
                aq->last_err_rc = status.response_code;
                AP_DBF_WARN("%s RC 0x%02x on 0x%02x.%04x -> AP_DEV_STATE_ERROR\n",
                            __func__, status.response_code,
                            AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
                return AP_SM_WAIT_NONE;
        }
        /* Check and maybe enable irq support (again) on this queue */
        if (!status.irq_enabled && status.queue_empty) {
                void *lsi_ptr = ap_airq_ptr();

                if (lsi_ptr && ap_queue_enable_irq(aq, lsi_ptr) == 0) {
                        aq->sm_state = AP_SM_STATE_SETIRQ_WAIT;
                        return AP_SM_WAIT_AGAIN;
                }
        }
        return AP_SM_WAIT_NONE;
}

/**
 * ap_sm_write(): Send messages from the request queue to an AP queue.
 * @aq: pointer to the AP queue
 *
 * Returns AP_SM_WAIT_NONE, AP_SM_WAIT_AGAIN, or AP_SM_WAIT_INTERRUPT
 */
static enum ap_sm_wait ap_sm_write(struct ap_queue *aq)
{
        struct ap_queue_status status;
        struct ap_message *ap_msg;
        ap_qid_t qid = aq->qid;

        if (aq->requestq_count <= 0)
                return AP_SM_WAIT_NONE;

        /* Start the next request on the queue. */
        ap_msg = list_entry(aq->requestq.next, struct ap_message, list);
        print_hex_dump_debug("apreq: ", DUMP_PREFIX_ADDRESS, 16, 1,
                             ap_msg->msg, ap_msg->len, false);
        status = __ap_send(qid, ap_msg->psmid,
                           ap_msg->msg, ap_msg->len,
                           ap_msg->flags & AP_MSG_FLAG_SPECIAL);
        if (status.async)
                return AP_SM_WAIT_NONE;
        switch (status.response_code) {
        case AP_RESPONSE_NORMAL:
                aq->queue_count = max_t(int, 1, aq->queue_count + 1);
                if (aq->queue_count == 1)
                        mod_timer(&aq->timeout, jiffies + aq->request_timeout);
                list_move_tail(&ap_msg->list, &aq->pendingq);
                aq->requestq_count--;
                aq->pendingq_count++;
                if (aq->queue_count < aq->card->hwinfo.qd) {
                        aq->sm_state = AP_SM_STATE_WORKING;
                        return AP_SM_WAIT_AGAIN;
                }
                fallthrough;
        case AP_RESPONSE_Q_FULL:
                aq->sm_state = AP_SM_STATE_QUEUE_FULL;
                return status.irq_enabled ?
                        AP_SM_WAIT_INTERRUPT : AP_SM_WAIT_HIGH_TIMEOUT;
        case AP_RESPONSE_RESET_IN_PROGRESS:
                aq->sm_state = AP_SM_STATE_RESET_WAIT;
                return AP_SM_WAIT_LOW_TIMEOUT;
        case AP_RESPONSE_INVALID_DOMAIN:
                AP_DBF_WARN("%s RESPONSE_INVALID_DOMAIN on NQAP\n", __func__);
                fallthrough;
        case AP_RESPONSE_MESSAGE_TOO_BIG:
        case AP_RESPONSE_REQ_FAC_NOT_INST:
                list_del_init(&ap_msg->list);
                aq->requestq_count--;
                ap_msg->rc = -EINVAL;
                ap_msg->receive(aq, ap_msg, NULL);
                return AP_SM_WAIT_AGAIN;
        default:
                aq->dev_state = AP_DEV_STATE_ERROR;
                aq->last_err_rc = status.response_code;
                AP_DBF_WARN("%s RC 0x%02x on 0x%02x.%04x -> AP_DEV_STATE_ERROR\n",
                            __func__, status.response_code,
                            AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
                return AP_SM_WAIT_NONE;
        }
}

/**
 * ap_sm_read_write(): Send and receive messages to/from an AP queue.
 * @aq: pointer to the AP queue
 *
 * Returns AP_SM_WAIT_NONE, AP_SM_WAIT_AGAIN, or AP_SM_WAIT_INTERRUPT
 */
static enum ap_sm_wait ap_sm_read_write(struct ap_queue *aq)
{
        return min(ap_sm_read(aq), ap_sm_write(aq));
}

/**
 * ap_sm_reset(): Reset an AP queue.
 * @aq: The AP queue
 *
 * Submit the Reset command to an AP queue.
 */
static enum ap_sm_wait ap_sm_reset(struct ap_queue *aq)
{
        struct ap_queue_status status;

        status = ap_rapq(aq->qid, aq->rapq_fbit);
        if (status.async)
                return AP_SM_WAIT_NONE;
        switch (status.response_code) {
        case AP_RESPONSE_NORMAL:
        case AP_RESPONSE_RESET_IN_PROGRESS:
                aq->sm_state = AP_SM_STATE_RESET_WAIT;
                aq->rapq_fbit = 0;
                return AP_SM_WAIT_LOW_TIMEOUT;
        default:
                aq->dev_state = AP_DEV_STATE_ERROR;
                aq->last_err_rc = status.response_code;
                AP_DBF_WARN("%s RC 0x%02x on 0x%02x.%04x -> AP_DEV_STATE_ERROR\n",
                            __func__, status.response_code,
                            AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
                return AP_SM_WAIT_NONE;
        }
}

/**
 * ap_sm_reset_wait(): Test queue for completion of the reset operation
 * @aq: pointer to the AP queue
 *
 * Returns AP_POLL_IMMEDIATELY, AP_POLL_AFTER_TIMEOUT or 0.
 */
static enum ap_sm_wait ap_sm_reset_wait(struct ap_queue *aq)
{
        struct ap_queue_status status;
        struct ap_tapq_hwinfo hwinfo;
        void *lsi_ptr;

        /* Get the status with TAPQ */
        status = ap_test_queue(aq->qid, 1, &hwinfo);

        switch (status.response_code) {
        case AP_RESPONSE_NORMAL:
                aq->se_bstate = hwinfo.bs;
                lsi_ptr = ap_airq_ptr();
                if (lsi_ptr && ap_queue_enable_irq(aq, lsi_ptr) == 0)
                        aq->sm_state = AP_SM_STATE_SETIRQ_WAIT;
                else
                        aq->sm_state = (aq->queue_count > 0) ?
                                AP_SM_STATE_WORKING : AP_SM_STATE_IDLE;
                return AP_SM_WAIT_AGAIN;
        case AP_RESPONSE_BUSY:
        case AP_RESPONSE_RESET_IN_PROGRESS:
                return AP_SM_WAIT_LOW_TIMEOUT;
        case AP_RESPONSE_Q_NOT_AVAIL:
        case AP_RESPONSE_DECONFIGURED:
        case AP_RESPONSE_CHECKSTOPPED:
        default:
                aq->dev_state = AP_DEV_STATE_ERROR;
                aq->last_err_rc = status.response_code;
                AP_DBF_WARN("%s RC 0x%02x on 0x%02x.%04x -> AP_DEV_STATE_ERROR\n",
                            __func__, status.response_code,
                            AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
                return AP_SM_WAIT_NONE;
        }
}

/**
 * ap_sm_setirq_wait(): Test queue for completion of the irq enablement
 * @aq: pointer to the AP queue
 *
 * Returns AP_POLL_IMMEDIATELY, AP_POLL_AFTER_TIMEOUT or 0.
 */
static enum ap_sm_wait ap_sm_setirq_wait(struct ap_queue *aq)
{
        struct ap_queue_status status;

        if (aq->queue_count > 0 && aq->reply)
                /* Try to read a completed message and get the status */
                status = ap_sm_recv(aq);
        else
                /* Get the status with TAPQ */
                status = ap_tapq(aq->qid, NULL);

        if (status.irq_enabled == 1) {
                /* Irqs are now enabled */
                aq->sm_state = (aq->queue_count > 0) ?
                        AP_SM_STATE_WORKING : AP_SM_STATE_IDLE;
        }

        switch (status.response_code) {
        case AP_RESPONSE_NORMAL:
                if (aq->queue_count > 0)
                        return AP_SM_WAIT_AGAIN;
                fallthrough;
        case AP_RESPONSE_NO_PENDING_REPLY:
                return AP_SM_WAIT_LOW_TIMEOUT;
        default:
                aq->dev_state = AP_DEV_STATE_ERROR;
                aq->last_err_rc = status.response_code;
                AP_DBF_WARN("%s RC 0x%02x on 0x%02x.%04x -> AP_DEV_STATE_ERROR\n",
                            __func__, status.response_code,
                            AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
                return AP_SM_WAIT_NONE;
        }
}

/**
 * ap_sm_assoc_wait(): Test queue for completion of a pending
 *      association request.
 * @aq: pointer to the AP queue
 */
static enum ap_sm_wait ap_sm_assoc_wait(struct ap_queue *aq)
{
        struct ap_queue_status status;
        struct ap_tapq_hwinfo hwinfo;

        status = ap_test_queue(aq->qid, 1, &hwinfo);
        /* handle asynchronous error on this queue */
        if (status.async && status.response_code) {
                aq->dev_state = AP_DEV_STATE_ERROR;
                aq->last_err_rc = status.response_code;
                AP_DBF_WARN("%s asynch RC 0x%02x on 0x%02x.%04x -> AP_DEV_STATE_ERROR\n",
                            __func__, status.response_code,
                            AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
                return AP_SM_WAIT_NONE;
        }
        if (status.response_code > AP_RESPONSE_BUSY) {
                aq->dev_state = AP_DEV_STATE_ERROR;
                aq->last_err_rc = status.response_code;
                AP_DBF_WARN("%s RC 0x%02x on 0x%02x.%04x -> AP_DEV_STATE_ERROR\n",
                            __func__, status.response_code,
                            AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
                return AP_SM_WAIT_NONE;
        }

        /* update queue's SE bind state */
        aq->se_bstate = hwinfo.bs;

        /* check bs bits */
        switch (hwinfo.bs) {
        case AP_BS_Q_USABLE:
                /* association is through */
                aq->sm_state = AP_SM_STATE_IDLE;
                pr_debug("queue 0x%02x.%04x associated with %u\n",
                         AP_QID_CARD(aq->qid),
                         AP_QID_QUEUE(aq->qid), aq->assoc_idx);
                return AP_SM_WAIT_NONE;
        case AP_BS_Q_USABLE_NO_SECURE_KEY:
                /* association still pending */
                return AP_SM_WAIT_LOW_TIMEOUT;
        default:
                /* reset from 'outside' happened or no idea at all */
                aq->assoc_idx = ASSOC_IDX_INVALID;
                aq->dev_state = AP_DEV_STATE_ERROR;
                aq->last_err_rc = status.response_code;
                AP_DBF_WARN("%s bs 0x%02x on 0x%02x.%04x -> AP_DEV_STATE_ERROR\n",
                            __func__, hwinfo.bs,
                            AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
                return AP_SM_WAIT_NONE;
        }
}

/*
 * AP state machine jump table
 */
static ap_func_t *ap_jumptable[NR_AP_SM_STATES][NR_AP_SM_EVENTS] = {
        [AP_SM_STATE_RESET_START] = {
                [AP_SM_EVENT_POLL] = ap_sm_reset,
                [AP_SM_EVENT_TIMEOUT] = ap_sm_nop,
        },
        [AP_SM_STATE_RESET_WAIT] = {
                [AP_SM_EVENT_POLL] = ap_sm_reset_wait,
                [AP_SM_EVENT_TIMEOUT] = ap_sm_nop,
        },
        [AP_SM_STATE_SETIRQ_WAIT] = {
                [AP_SM_EVENT_POLL] = ap_sm_setirq_wait,
                [AP_SM_EVENT_TIMEOUT] = ap_sm_nop,
        },
        [AP_SM_STATE_IDLE] = {
                [AP_SM_EVENT_POLL] = ap_sm_write,
                [AP_SM_EVENT_TIMEOUT] = ap_sm_nop,
        },
        [AP_SM_STATE_WORKING] = {
                [AP_SM_EVENT_POLL] = ap_sm_read_write,
                [AP_SM_EVENT_TIMEOUT] = ap_sm_reset,
        },
        [AP_SM_STATE_QUEUE_FULL] = {
                [AP_SM_EVENT_POLL] = ap_sm_read,
                [AP_SM_EVENT_TIMEOUT] = ap_sm_reset,
        },
        [AP_SM_STATE_ASSOC_WAIT] = {
                [AP_SM_EVENT_POLL] = ap_sm_assoc_wait,
                [AP_SM_EVENT_TIMEOUT] = ap_sm_reset,
        },
};

enum ap_sm_wait ap_sm_event(struct ap_queue *aq, enum ap_sm_event event)
{
        if (aq->config && !aq->chkstop &&
            aq->dev_state > AP_DEV_STATE_UNINITIATED)
                return ap_jumptable[aq->sm_state][event](aq);
        else
                return AP_SM_WAIT_NONE;
}

enum ap_sm_wait ap_sm_event_loop(struct ap_queue *aq, enum ap_sm_event event)
{
        enum ap_sm_wait wait;

        while ((wait = ap_sm_event(aq, event)) == AP_SM_WAIT_AGAIN)
                ;
        return wait;
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* AP queue related attributes.
|
|
|
|
*/
|
2018-08-17 12:36:01 +02:00
|
|
|
static ssize_t request_count_show(struct device *dev,
				  struct device_attribute *attr,
				  char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	bool valid = false;
	u64 req_cnt;

	spin_lock_bh(&aq->lock);
	if (aq->dev_state > AP_DEV_STATE_UNINITIATED) {
		req_cnt = aq->total_request_count;
		valid = true;
	}
	spin_unlock_bh(&aq->lock);

	if (valid)
		return sysfs_emit(buf, "%llu\n", req_cnt);
	else
		return sysfs_emit(buf, "-\n");
}

static ssize_t request_count_store(struct device *dev,
				   struct device_attribute *attr,
				   const char *buf, size_t count)
{
	struct ap_queue *aq = to_ap_queue(dev);

	spin_lock_bh(&aq->lock);
	aq->total_request_count = 0;
	spin_unlock_bh(&aq->lock);

	return count;
}

static DEVICE_ATTR_RW(request_count);

static ssize_t requestq_count_show(struct device *dev,
				   struct device_attribute *attr, char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	unsigned int reqq_cnt = 0;

	spin_lock_bh(&aq->lock);
	if (aq->dev_state > AP_DEV_STATE_UNINITIATED)
		reqq_cnt = aq->requestq_count;
	spin_unlock_bh(&aq->lock);
	return sysfs_emit(buf, "%d\n", reqq_cnt);
}

static DEVICE_ATTR_RO(requestq_count);

static ssize_t pendingq_count_show(struct device *dev,
				   struct device_attribute *attr, char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	unsigned int penq_cnt = 0;

	spin_lock_bh(&aq->lock);
	if (aq->dev_state > AP_DEV_STATE_UNINITIATED)
		penq_cnt = aq->pendingq_count;
	spin_unlock_bh(&aq->lock);
	return sysfs_emit(buf, "%d\n", penq_cnt);
}

static DEVICE_ATTR_RO(pendingq_count);

static ssize_t reset_show(struct device *dev,
			  struct device_attribute *attr, char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	int rc = 0;

	spin_lock_bh(&aq->lock);
	switch (aq->sm_state) {
	case AP_SM_STATE_RESET_START:
	case AP_SM_STATE_RESET_WAIT:
		rc = sysfs_emit(buf, "Reset in progress.\n");
		break;
	case AP_SM_STATE_WORKING:
	case AP_SM_STATE_QUEUE_FULL:
		rc = sysfs_emit(buf, "Reset Timer armed.\n");
		break;
	default:
		rc = sysfs_emit(buf, "No Reset Timer set.\n");
	}
	spin_unlock_bh(&aq->lock);
	return rc;
}

static ssize_t reset_store(struct device *dev,
			   struct device_attribute *attr,
			   const char *buf, size_t count)
{
	struct ap_queue *aq = to_ap_queue(dev);

	spin_lock_bh(&aq->lock);
	__ap_flush_queue(aq);
	aq->sm_state = AP_SM_STATE_RESET_START;
	ap_wait(ap_sm_event(aq, AP_SM_EVENT_POLL));
	spin_unlock_bh(&aq->lock);

	AP_DBF_INFO("%s reset queue=%02x.%04x triggered by user\n",
		    __func__, AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));

	return count;
}

static DEVICE_ATTR_RW(reset);

static ssize_t interrupt_show(struct device *dev,
			      struct device_attribute *attr, char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	struct ap_queue_status status;
	int rc = 0;

	spin_lock_bh(&aq->lock);
	if (aq->sm_state == AP_SM_STATE_SETIRQ_WAIT) {
		rc = sysfs_emit(buf, "Enable Interrupt pending.\n");
	} else {
		status = ap_tapq(aq->qid, NULL);
		if (status.irq_enabled)
			rc = sysfs_emit(buf, "Interrupts enabled.\n");
		else
			rc = sysfs_emit(buf, "Interrupts disabled.\n");
	}
	spin_unlock_bh(&aq->lock);

	return rc;
}

static DEVICE_ATTR_RO(interrupt);

static ssize_t config_show(struct device *dev,
			   struct device_attribute *attr, char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	int rc;

	spin_lock_bh(&aq->lock);
	rc = sysfs_emit(buf, "%d\n", aq->config ? 1 : 0);
	spin_unlock_bh(&aq->lock);
	return rc;
}

static DEVICE_ATTR_RO(config);

static ssize_t chkstop_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	int rc;

	spin_lock_bh(&aq->lock);
	rc = sysfs_emit(buf, "%d\n", aq->chkstop ? 1 : 0);
	spin_unlock_bh(&aq->lock);
	return rc;
}

static DEVICE_ATTR_RO(chkstop);

static ssize_t ap_functions_show(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	struct ap_queue_status status;
	struct ap_tapq_hwinfo hwinfo;

	status = ap_test_queue(aq->qid, 1, &hwinfo);
	if (status.response_code > AP_RESPONSE_BUSY) {
		pr_debug("RC 0x%02x on tapq(0x%02x.%04x)\n",
			 status.response_code,
			 AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
		return -EIO;
	}

	return sysfs_emit(buf, "0x%08X\n", hwinfo.fac);
}

static DEVICE_ATTR_RO(ap_functions);

#ifdef CONFIG_AP_DEBUG
static ssize_t states_show(struct device *dev,
			   struct device_attribute *attr, char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	int rc = 0;

	spin_lock_bh(&aq->lock);
	/* queue device state */
	switch (aq->dev_state) {
	case AP_DEV_STATE_UNINITIATED:
		rc = sysfs_emit(buf, "UNINITIATED\n");
		break;
	case AP_DEV_STATE_OPERATING:
		rc = sysfs_emit(buf, "OPERATING");
		break;
	case AP_DEV_STATE_SHUTDOWN:
		rc = sysfs_emit(buf, "SHUTDOWN");
		break;
	case AP_DEV_STATE_ERROR:
		rc = sysfs_emit(buf, "ERROR");
		break;
	default:
		rc = sysfs_emit(buf, "UNKNOWN");
	}
	/* state machine state */
	if (aq->dev_state) {
		switch (aq->sm_state) {
		case AP_SM_STATE_RESET_START:
			rc += sysfs_emit_at(buf, rc, " [RESET_START]\n");
			break;
		case AP_SM_STATE_RESET_WAIT:
			rc += sysfs_emit_at(buf, rc, " [RESET_WAIT]\n");
			break;
		case AP_SM_STATE_SETIRQ_WAIT:
			rc += sysfs_emit_at(buf, rc, " [SETIRQ_WAIT]\n");
			break;
		case AP_SM_STATE_IDLE:
			rc += sysfs_emit_at(buf, rc, " [IDLE]\n");
			break;
		case AP_SM_STATE_WORKING:
			rc += sysfs_emit_at(buf, rc, " [WORKING]\n");
			break;
		case AP_SM_STATE_QUEUE_FULL:
			rc += sysfs_emit_at(buf, rc, " [FULL]\n");
			break;
		case AP_SM_STATE_ASSOC_WAIT:
			rc += sysfs_emit_at(buf, rc, " [ASSOC_WAIT]\n");
			break;
		default:
			rc += sysfs_emit_at(buf, rc, " [UNKNOWN]\n");
		}
	}
	spin_unlock_bh(&aq->lock);

	return rc;
}
static DEVICE_ATTR_RO(states);

static ssize_t last_err_rc_show(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	int rc;

	spin_lock_bh(&aq->lock);
	rc = aq->last_err_rc;
	spin_unlock_bh(&aq->lock);

	switch (rc) {
	case AP_RESPONSE_NORMAL:
		return sysfs_emit(buf, "NORMAL\n");
	case AP_RESPONSE_Q_NOT_AVAIL:
		return sysfs_emit(buf, "Q_NOT_AVAIL\n");
	case AP_RESPONSE_RESET_IN_PROGRESS:
		return sysfs_emit(buf, "RESET_IN_PROGRESS\n");
	case AP_RESPONSE_DECONFIGURED:
		return sysfs_emit(buf, "DECONFIGURED\n");
	case AP_RESPONSE_CHECKSTOPPED:
		return sysfs_emit(buf, "CHECKSTOPPED\n");
	case AP_RESPONSE_BUSY:
		return sysfs_emit(buf, "BUSY\n");
	case AP_RESPONSE_INVALID_ADDRESS:
		return sysfs_emit(buf, "INVALID_ADDRESS\n");
	case AP_RESPONSE_OTHERWISE_CHANGED:
		return sysfs_emit(buf, "OTHERWISE_CHANGED\n");
	case AP_RESPONSE_Q_FULL:
		return sysfs_emit(buf, "Q_FULL/NO_PENDING_REPLY\n");
	case AP_RESPONSE_INDEX_TOO_BIG:
		return sysfs_emit(buf, "INDEX_TOO_BIG\n");
	case AP_RESPONSE_NO_FIRST_PART:
		return sysfs_emit(buf, "NO_FIRST_PART\n");
	case AP_RESPONSE_MESSAGE_TOO_BIG:
		return sysfs_emit(buf, "MESSAGE_TOO_BIG\n");
	case AP_RESPONSE_REQ_FAC_NOT_INST:
		return sysfs_emit(buf, "REQ_FAC_NOT_INST\n");
	default:
		return sysfs_emit(buf, "response code %d\n", rc);
	}
}
static DEVICE_ATTR_RO(last_err_rc);
#endif

static struct attribute *ap_queue_dev_attrs[] = {
	&dev_attr_request_count.attr,
	&dev_attr_requestq_count.attr,
	&dev_attr_pendingq_count.attr,
	&dev_attr_reset.attr,
	&dev_attr_interrupt.attr,
	&dev_attr_config.attr,
	&dev_attr_chkstop.attr,
	&dev_attr_ap_functions.attr,
#ifdef CONFIG_AP_DEBUG
	&dev_attr_states.attr,
	&dev_attr_last_err_rc.attr,
#endif
	NULL
};

static struct attribute_group ap_queue_dev_attr_group = {
	.attrs = ap_queue_dev_attrs
};

static const struct attribute_group *ap_queue_dev_attr_groups[] = {
	&ap_queue_dev_attr_group,
	NULL
};

static struct device_type ap_queue_type = {
	.name = "ap_queue",
	.groups = ap_queue_dev_attr_groups,
};

static ssize_t se_bind_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	struct ap_queue_status status;
	struct ap_tapq_hwinfo hwinfo;

	if (!ap_q_supports_bind(aq))
		return sysfs_emit(buf, "-\n");

	status = ap_test_queue(aq->qid, 1, &hwinfo);
	if (status.response_code > AP_RESPONSE_BUSY) {
		pr_debug("RC 0x%02x on tapq(0x%02x.%04x)\n",
			 status.response_code,
			 AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
		return -EIO;
	}

	/* update queue's SE bind state */
	spin_lock_bh(&aq->lock);
	aq->se_bstate = hwinfo.bs;
	spin_unlock_bh(&aq->lock);

	switch (hwinfo.bs) {
	case AP_BS_Q_USABLE:
	case AP_BS_Q_USABLE_NO_SECURE_KEY:
		return sysfs_emit(buf, "bound\n");
	default:
		return sysfs_emit(buf, "unbound\n");
	}
}

static ssize_t se_bind_store(struct device *dev,
			     struct device_attribute *attr,
			     const char *buf, size_t count)
{
	struct ap_queue *aq = to_ap_queue(dev);
	struct ap_queue_status status;
	struct ap_tapq_hwinfo hwinfo;
	bool value;
	int rc;

	if (!ap_q_supports_bind(aq))
		return -EINVAL;

	/* only 0 (unbind) and 1 (bind) allowed */
	rc = kstrtobool(buf, &value);
	if (rc)
		return rc;

	if (!value) {
		/* Unbind. Set F bit arg and trigger RAPQ */
		spin_lock_bh(&aq->lock);
		__ap_flush_queue(aq);
		aq->rapq_fbit = 1;
		_ap_queue_init_state(aq);
		rc = count;
		goto out;
	}

	/* Bind. Check current SE bind state */
	status = ap_test_queue(aq->qid, 1, &hwinfo);
	if (status.response_code) {
		AP_DBF_WARN("%s RC 0x%02x on tapq(0x%02x.%04x)\n",
			    __func__, status.response_code,
			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
		return -EIO;
	}

	/* Update BS state */
	spin_lock_bh(&aq->lock);
	aq->se_bstate = hwinfo.bs;
	if (hwinfo.bs != AP_BS_Q_AVAIL_FOR_BINDING) {
		AP_DBF_WARN("%s bind attempt with bs %d on queue 0x%02x.%04x\n",
			    __func__, hwinfo.bs,
			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
		rc = -EINVAL;
		goto out;
	}

	/* Check SM state */
	if (aq->sm_state < AP_SM_STATE_IDLE) {
		rc = -EBUSY;
		goto out;
	}

	/* invoke BAPQ */
	status = ap_bapq(aq->qid);
	if (status.response_code) {
		AP_DBF_WARN("%s RC 0x%02x on bapq(0x%02x.%04x)\n",
			    __func__, status.response_code,
			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
		rc = -EIO;
		goto out;
	}
	aq->assoc_idx = ASSOC_IDX_INVALID;

	/* verify SE bind state */
	status = ap_test_queue(aq->qid, 1, &hwinfo);
	if (status.response_code) {
		AP_DBF_WARN("%s RC 0x%02x on tapq(0x%02x.%04x)\n",
			    __func__, status.response_code,
			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
		rc = -EIO;
		goto out;
	}
	aq->se_bstate = hwinfo.bs;
	if (!(hwinfo.bs == AP_BS_Q_USABLE ||
	      hwinfo.bs == AP_BS_Q_USABLE_NO_SECURE_KEY)) {
		AP_DBF_WARN("%s BAPQ success, but bs shows %d on queue 0x%02x.%04x\n",
			    __func__, hwinfo.bs,
			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
		rc = -EIO;
		goto out;
	}

	/* SE bind was successful */
	AP_DBF_INFO("%s bapq(0x%02x.%04x) success\n", __func__,
		    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
	rc = count;

out:
	spin_unlock_bh(&aq->lock);
	return rc;
}

static DEVICE_ATTR_RW(se_bind);

static ssize_t se_associate_show(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	struct ap_queue *aq = to_ap_queue(dev);
	struct ap_queue_status status;
	struct ap_tapq_hwinfo hwinfo;

	if (!ap_q_supports_assoc(aq))
		return sysfs_emit(buf, "-\n");

	status = ap_test_queue(aq->qid, 1, &hwinfo);
	if (status.response_code > AP_RESPONSE_BUSY) {
		pr_debug("RC 0x%02x on tapq(0x%02x.%04x)\n",
			 status.response_code,
			 AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
		return -EIO;
	}

	/* update queue's SE bind state */
	spin_lock_bh(&aq->lock);
	aq->se_bstate = hwinfo.bs;
	spin_unlock_bh(&aq->lock);

	switch (hwinfo.bs) {
	case AP_BS_Q_USABLE:
		if (aq->assoc_idx == ASSOC_IDX_INVALID) {
			AP_DBF_WARN("%s AP_BS_Q_USABLE but invalid assoc_idx\n", __func__);
			return -EIO;
		}
		return sysfs_emit(buf, "associated %u\n", aq->assoc_idx);
	case AP_BS_Q_USABLE_NO_SECURE_KEY:
		if (aq->assoc_idx != ASSOC_IDX_INVALID)
			return sysfs_emit(buf, "association pending\n");
		fallthrough;
	default:
		return sysfs_emit(buf, "unassociated\n");
	}
}

static ssize_t se_associate_store(struct device *dev,
				  struct device_attribute *attr,
				  const char *buf, size_t count)
{
	struct ap_queue *aq = to_ap_queue(dev);
	struct ap_queue_status status;
	struct ap_tapq_hwinfo hwinfo;
	unsigned int value;
	int rc;

	if (!ap_q_supports_assoc(aq))
		return -EINVAL;

	/* association index needs to be >= 0 */
	rc = kstrtouint(buf, 0, &value);
	if (rc)
		return rc;
	if (value >= ASSOC_IDX_INVALID)
		return -EINVAL;

	/* check current SE bind state */
	status = ap_test_queue(aq->qid, 1, &hwinfo);
	if (status.response_code) {
		AP_DBF_WARN("%s RC 0x%02x on tapq(0x%02x.%04x)\n",
			    __func__, status.response_code,
			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
		return -EIO;
	}
	spin_lock_bh(&aq->lock);
	aq->se_bstate = hwinfo.bs;
	if (hwinfo.bs != AP_BS_Q_USABLE_NO_SECURE_KEY) {
		AP_DBF_WARN("%s association attempt with bs %d on queue 0x%02x.%04x\n",
			    __func__, hwinfo.bs,
			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
		rc = -EINVAL;
		goto out;
	}

	/* check SM state */
	if (aq->sm_state != AP_SM_STATE_IDLE) {
		rc = -EBUSY;
		goto out;
	}

	/* trigger the asynchronous association request */
	status = ap_aapq(aq->qid, value);
	switch (status.response_code) {
	case AP_RESPONSE_NORMAL:
	case AP_RESPONSE_STATE_CHANGE_IN_PROGRESS:
		aq->sm_state = AP_SM_STATE_ASSOC_WAIT;
		aq->assoc_idx = value;
		ap_wait(ap_sm_event(aq, AP_SM_EVENT_POLL));
		break;
	default:
		AP_DBF_WARN("%s RC 0x%02x on aapq(0x%02x.%04x)\n",
			    __func__, status.response_code,
			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
		rc = -EIO;
		goto out;
	}

	rc = count;

out:
	spin_unlock_bh(&aq->lock);
	return rc;
}

static DEVICE_ATTR_RW(se_associate);

static struct attribute *ap_queue_dev_sb_attrs[] = {
	&dev_attr_se_bind.attr,
	&dev_attr_se_associate.attr,
	NULL
};

static struct attribute_group ap_queue_dev_sb_attr_group = {
	.attrs = ap_queue_dev_sb_attrs
};

static const struct attribute_group *ap_queue_dev_sb_attr_groups[] = {
	&ap_queue_dev_sb_attr_group,
	NULL
};

static void ap_queue_device_release(struct device *dev)
{
	struct ap_queue *aq = to_ap_queue(dev);

	spin_lock_bh(&ap_queues_lock);
	hash_del(&aq->hnode);
	spin_unlock_bh(&ap_queues_lock);

	kfree(aq);
}

struct ap_queue *ap_queue_create(ap_qid_t qid, struct ap_card *ac)
{
	struct ap_queue *aq;

	aq = kzalloc(sizeof(*aq), GFP_KERNEL);
	if (!aq)
		return NULL;
	aq->card = ac;
	aq->ap_dev.device.release = ap_queue_device_release;
	aq->ap_dev.device.type = &ap_queue_type;
	aq->ap_dev.device_type = ac->ap_dev.device_type;
	/* in SE environment add bind/associate attributes group */
	if (ap_is_se_guest() && ap_q_supported_in_se(aq))
		aq->ap_dev.device.groups = ap_queue_dev_sb_attr_groups;
	aq->qid = qid;
	spin_lock_init(&aq->lock);
	INIT_LIST_HEAD(&aq->pendingq);
	INIT_LIST_HEAD(&aq->requestq);
	timer_setup(&aq->timeout, ap_request_timeout, 0);

	return aq;
}

void ap_queue_init_reply(struct ap_queue *aq, struct ap_message *reply)
{
	aq->reply = reply;

	spin_lock_bh(&aq->lock);
	ap_wait(ap_sm_event(aq, AP_SM_EVENT_POLL));
	spin_unlock_bh(&aq->lock);
}
EXPORT_SYMBOL(ap_queue_init_reply);

/**
 * ap_queue_message(): Queue a request to an AP device.
 * @aq: The AP device to queue the message to
 * @ap_msg: The message that is to be added
 */
int ap_queue_message(struct ap_queue *aq, struct ap_message *ap_msg)
{
	int rc = 0;

	/* msg needs to have a valid receive-callback */
	BUG_ON(!ap_msg->receive);

	spin_lock_bh(&aq->lock);

	/* only allow to queue new messages if device state is ok */
	if (aq->dev_state == AP_DEV_STATE_OPERATING) {
		list_add_tail(&ap_msg->list, &aq->requestq);
		aq->requestq_count++;
		aq->total_request_count++;
		atomic64_inc(&aq->card->total_request_count);
	} else {
		rc = -ENODEV;
	}

	/* Send/receive as many request from the queue as possible. */
	ap_wait(ap_sm_event_loop(aq, AP_SM_EVENT_POLL));

	spin_unlock_bh(&aq->lock);

	return rc;
}
EXPORT_SYMBOL(ap_queue_message);

/**
 * ap_queue_usable(): Check if queue is usable just now.
 * @aq: The AP queue device to test for usability.
 * This function is intended for the scheduler to query if it makes
 * sense to enqueue a message into this AP queue device by calling
 * ap_queue_message(). The perspective is very short-term as the
 * state machine and device state(s) may change at any time.
 */
bool ap_queue_usable(struct ap_queue *aq)
{
	bool rc = true;

	spin_lock_bh(&aq->lock);

	/* check for not configured or checkstopped */
	if (!aq->config || aq->chkstop) {
		rc = false;
		goto unlock_and_out;
	}

	/* device state needs to be ok */
	if (aq->dev_state != AP_DEV_STATE_OPERATING) {
		rc = false;
		goto unlock_and_out;
	}

	/* SE guest's queues additionally need to be bound */
	if (ap_is_se_guest()) {
		if (!ap_q_supported_in_se(aq)) {
			rc = false;
			goto unlock_and_out;
		}
		if (ap_q_needs_bind(aq) &&
		    !(aq->se_bstate == AP_BS_Q_USABLE ||
		      aq->se_bstate == AP_BS_Q_USABLE_NO_SECURE_KEY))
			rc = false;
	}

unlock_and_out:
	spin_unlock_bh(&aq->lock);
	return rc;
}
EXPORT_SYMBOL(ap_queue_usable);

/**
 * ap_cancel_message(): Cancel a crypto request.
 * @aq: The AP device that has the message queued
 * @ap_msg: The message that is to be removed
 *
 * Cancel a crypto request. This is done by removing the request
 * from the device pending or request queue. Note that the
 * request stays on the AP queue. When it finishes the message
 * reply will be discarded because the psmid can't be found.
 */
void ap_cancel_message(struct ap_queue *aq, struct ap_message *ap_msg)
{
	struct ap_message *tmp;

	spin_lock_bh(&aq->lock);
	if (!list_empty(&ap_msg->list)) {
		list_for_each_entry(tmp, &aq->pendingq, list)
			if (tmp->psmid == ap_msg->psmid) {
				aq->pendingq_count--;
				goto found;
			}
		aq->requestq_count--;
found:
		list_del_init(&ap_msg->list);
	}
	spin_unlock_bh(&aq->lock);
}
EXPORT_SYMBOL(ap_cancel_message);

/**
 * __ap_flush_queue(): Flush requests.
 * @aq: Pointer to the AP queue
 *
 * Flush all requests from the request/pending queue of an AP device.
 */
static void __ap_flush_queue(struct ap_queue *aq)
{
	struct ap_message *ap_msg, *next;

	list_for_each_entry_safe(ap_msg, next, &aq->pendingq, list) {
		list_del_init(&ap_msg->list);
		aq->pendingq_count--;
		ap_msg->rc = -EAGAIN;
		ap_msg->receive(aq, ap_msg, NULL);
	}
	list_for_each_entry_safe(ap_msg, next, &aq->requestq, list) {
		list_del_init(&ap_msg->list);
		aq->requestq_count--;
		ap_msg->rc = -EAGAIN;
		ap_msg->receive(aq, ap_msg, NULL);
	}
	aq->queue_count = 0;
}

void ap_flush_queue(struct ap_queue *aq)
{
	spin_lock_bh(&aq->lock);
	__ap_flush_queue(aq);
	spin_unlock_bh(&aq->lock);
}
EXPORT_SYMBOL(ap_flush_queue);

void ap_queue_prepare_remove(struct ap_queue *aq)
{
	spin_lock_bh(&aq->lock);
	/* flush queue */
	__ap_flush_queue(aq);
	/* move queue device state to SHUTDOWN in progress */
	aq->dev_state = AP_DEV_STATE_SHUTDOWN;
	spin_unlock_bh(&aq->lock);
	timer_delete_sync(&aq->timeout);
}

void ap_queue_remove(struct ap_queue *aq)
{
	/*
	 * all messages have been flushed and the device state
	 * is SHUTDOWN. Now reset with zero which also clears
	 * the irq registration and move the device state
	 * to the initial value AP_DEV_STATE_UNINITIATED.
	 */
	spin_lock_bh(&aq->lock);
	ap_zapq(aq->qid, 0);
	aq->dev_state = AP_DEV_STATE_UNINITIATED;
	spin_unlock_bh(&aq->lock);
}

void _ap_queue_init_state(struct ap_queue *aq)
{
	aq->dev_state = AP_DEV_STATE_OPERATING;
	aq->sm_state = AP_SM_STATE_RESET_START;
	aq->last_err_rc = 0;
	aq->assoc_idx = ASSOC_IDX_INVALID;
	ap_wait(ap_sm_event(aq, AP_SM_EVENT_POLL));
}

void ap_queue_init_state(struct ap_queue *aq)
{
	spin_lock_bh(&aq->lock);
	_ap_queue_init_state(aq);
	spin_unlock_bh(&aq->lock);
}
EXPORT_SYMBOL(ap_queue_init_state);