// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2007-2008 Advanced Micro Devices, Inc.
 * Author: Joerg Roedel <jroedel@suse.de>
 */

#define pr_fmt(fmt)	"iommu: " fmt

#include <linux/amba/bus.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/bits.h>
#include <linux/bug.h>
#include <linux/types.h>
#include <linux/init.h>
#include <linux/export.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/host1x_context_bus.h>
#include <linux/iommu.h>
#include <linux/iommufd.h>
#include <linux/idr.h>
#include <linux/err.h>
#include <linux/pci.h>
#include <linux/pci-ats.h>
#include <linux/bitops.h>
#include <linux/platform_device.h>
#include <linux/property.h>
#include <linux/fsl/mc.h>
#include <linux/module.h>
#include <linux/cc_platform.h>
#include <linux/cdx/cdx_bus.h>
#include <trace/events/iommu.h>
#include <linux/sched/mm.h>
#include <linux/msi.h>
#include <uapi/linux/iommufd.h>

# include "dma-iommu.h"
2023-07-17 15:12:09 -03:00
# include "iommu-priv.h"
2022-08-16 18:28:05 +01:00
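/*
 * Groups are exposed to userspace under /sys/kernel/iommu_groups; the IDAs
 * below hand out the small integer IDs used to name them and to allocate
 * global PASIDs.
 */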
static struct kset *iommu_group_kset;
static DEFINE_IDA(iommu_group_ida);
static DEFINE_IDA(iommu_global_pasid_ida);

static unsigned int iommu_def_domain_type __read_mostly;
static bool iommu_dma_strict __read_mostly = IS_ENABLED(CONFIG_IOMMU_DEFAULT_DMA_STRICT);
static u32 iommu_cmd_line __read_mostly;

/* Tags used with xa_tag_pointer() in group->pasid_array */
enum { IOMMU_PASID_ARRAY_DOMAIN = 0, IOMMU_PASID_ARRAY_HANDLE = 1 };

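/*
 * An iommu_group is the unit of isolation managed by the IOMMU core: each
 * device belongs to at most one group (linked from dev->iommu_group), and the
 * group is kept alive by kobject reference counting until its last device and
 * user are gone.
 */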
struct iommu_group {
	struct kobject kobj;
	struct kobject *devices_kobj;
	struct list_head devices;
	struct xarray pasid_array;
	struct mutex mutex;
	void *iommu_data;
	void (*iommu_data_release)(void *iommu_data);
	char *name;
	int id;
	struct iommu_domain *default_domain;
	struct iommu_domain *blocking_domain;
	struct iommu_domain *domain;
	struct list_head entry;
	unsigned int owner_cnt;
	void *owner;
};

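/* Membership of one device in an iommu_group, linked on group->devices. */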
struct group_device {
	struct list_head list;
	struct device *dev;
	char *name;
};

/* Iterate over each struct group_device in a struct iommu_group */
#define for_each_group_device(group, pos) \
	list_for_each_entry(pos, &(group)->devices, list)

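/* sysfs attribute type for the files in each group's iommu_groups directory. */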
struct iommu_group_attribute {
	struct attribute attr;
	ssize_t (*show)(struct iommu_group *group, char *buf);
	ssize_t (*store)(struct iommu_group *group,
			 const char *buf, size_t count);
};

static const char * const iommu_group_resv_type_string[] = {
	[IOMMU_RESV_DIRECT]		= "direct",
	[IOMMU_RESV_DIRECT_RELAXABLE]	= "direct-relaxable",
	[IOMMU_RESV_RESERVED]		= "reserved",
	[IOMMU_RESV_MSI]		= "msi",
	[IOMMU_RESV_SW_MSI]		= "msi",
};

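/*
 * Bits in iommu_cmd_line recording that the default domain type or the
 * strict/lazy TLB invalidation policy was set on the kernel command line.
 */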
#define IOMMU_CMD_LINE_DMA_API		BIT(0)
#define IOMMU_CMD_LINE_STRICT		BIT(1)

static int bus_iommu_probe(const struct bus_type *bus);
static int iommu_bus_notifier(struct notifier_block *nb,
			      unsigned long action, void *data);
static void iommu_release_device(struct device *dev);
static int __iommu_attach_device(struct iommu_domain *domain,
				 struct device *dev);
static int __iommu_attach_group(struct iommu_domain *domain,
				struct iommu_group *group);
static struct iommu_domain *__iommu_paging_domain_alloc_flags(struct device *dev,
							      unsigned int type,
							      unsigned int flags);

enum {
	IOMMU_SET_DOMAIN_MUST_SUCCEED = 1 << 0,
};

static int __iommu_device_set_domain(struct iommu_group *group,
				     struct device *dev,
				     struct iommu_domain *new_domain,
				     unsigned int flags);
static int __iommu_group_set_domain_internal(struct iommu_group *group,
					     struct iommu_domain *new_domain,
					     unsigned int flags);
static int __iommu_group_set_domain(struct iommu_group *group,
				    struct iommu_domain *new_domain)
{
	return __iommu_group_set_domain_internal(group, new_domain, 0);
}
static void __iommu_group_set_domain_nofail(struct iommu_group *group,
					    struct iommu_domain *new_domain)
{
	WARN_ON(__iommu_group_set_domain_internal(
		group, new_domain, IOMMU_SET_DOMAIN_MUST_SUCCEED));
}
static int iommu_setup_default_domain(struct iommu_group *group,
				      int target_type);
static int iommu_create_device_direct_mappings(struct iommu_domain *domain,
					       struct device *dev);
static ssize_t iommu_group_store_type(struct iommu_group *group,
				      const char *buf, size_t count);
static struct group_device *iommu_group_alloc_device(struct iommu_group *group,
						     struct device *dev);
static void __iommu_group_free_device(struct iommu_group *group,
				      struct group_device *grp_dev);
static void iommu_domain_init(struct iommu_domain *domain, unsigned int type,
			      const struct iommu_ops *ops);

#define IOMMU_GROUP_ATTR(_name, _mode, _show, _store)		\
struct iommu_group_attribute iommu_group_attr_##_name =	\
	__ATTR(_name, _mode, _show, _store)

#define to_iommu_group_attr(_attr)	\
	container_of(_attr, struct iommu_group_attribute, attr)
#define to_iommu_group(_kobj)		\
	container_of(_kobj, struct iommu_group, kobj)

static LIST_HEAD(iommu_device_list);
static DEFINE_SPINLOCK(iommu_device_lock);

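/* Buses whose devices the IOMMU core probes and watches via bus notifiers. */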
static const struct bus_type * const iommu_buses[] = {
	&platform_bus_type,
#ifdef CONFIG_PCI
	&pci_bus_type,
#endif
#ifdef CONFIG_ARM_AMBA
	&amba_bustype,
#endif
#ifdef CONFIG_FSL_MC_BUS
	&fsl_mc_bus_type,
#endif
#ifdef CONFIG_TEGRA_HOST1X_CONTEXT_BUS
	&host1x_context_device_bus_type,
#endif
#ifdef CONFIG_CDX_BUS
	&cdx_bus_type,
#endif
};

/*
 * Use a function instead of an array here because the domain-type is a
 * bit-field, so an array would waste memory.
 */
static const char *iommu_domain_type_str(unsigned int t)
{
	switch (t) {
	case IOMMU_DOMAIN_BLOCKED:
		return "Blocked";
	case IOMMU_DOMAIN_IDENTITY:
		return "Passthrough";
	case IOMMU_DOMAIN_UNMANAGED:
		return "Unmanaged";
	case IOMMU_DOMAIN_DMA:
	case IOMMU_DOMAIN_DMA_FQ:
		return "Translated";
	case IOMMU_DOMAIN_PLATFORM:
		return "Platform";
	default:
		return "Unknown";
	}
}

static int __init iommu_subsys_init(void)
{
2022-08-15 17:20:05 +01:00
        struct notifier_block *nb;
2021-04-01 17:52:53 +02:00
        if (!(iommu_cmd_line & IOMMU_CMD_LINE_DMA_API)) {
2019-08-19 15:22:54 +02:00
                if (IS_ENABLED(CONFIG_IOMMU_DEFAULT_PASSTHROUGH))
                        iommu_set_default_passthrough(false);
                else
                        iommu_set_default_translated(false);
2019-08-19 15:22:55 +02:00
2021-09-08 17:58:39 -05:00
                if (iommu_default_passthrough() && cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
2019-09-03 15:15:44 +02:00
                        pr_info("Memory encryption detected - Disabling default IOMMU Passthrough\n");
2019-08-19 15:22:55 +02:00
                        iommu_set_default_translated(false);
                }
2019-08-19 15:22:54 +02:00
        }
2021-08-11 13:21:34 +01:00
        if (!iommu_default_passthrough() && !iommu_dma_strict)
                iommu_def_domain_type = IOMMU_DOMAIN_DMA_FQ;
2023-05-09 12:10:48 -07:00
        pr_info("Default domain type: %s%s\n",
2019-08-19 15:22:54 +02:00
                iommu_domain_type_str(iommu_def_domain_type),
2021-04-01 17:52:53 +02:00
                (iommu_cmd_line & IOMMU_CMD_LINE_DMA_API) ?
2023-05-09 12:10:48 -07:00
                        " (set via kernel command line)" : "");
2019-08-19 15:22:53 +02:00
2021-08-11 13:21:36 +01:00
        if (!iommu_default_passthrough())
2023-05-09 12:10:48 -07:00
                pr_info("DMA domain TLB invalidation policy: %s mode%s\n",
2021-08-11 13:21:36 +01:00
                        iommu_dma_strict ? "strict" : "lazy",
                        (iommu_cmd_line & IOMMU_CMD_LINE_STRICT) ?
2023-05-09 12:10:48 -07:00
                                " (set via kernel command line)" : "");
2021-07-12 19:12:16 +08:00
2022-08-15 17:20:05 +01:00
        nb = kcalloc(ARRAY_SIZE(iommu_buses), sizeof(*nb), GFP_KERNEL);
        if (!nb)
                return -ENOMEM;
        for (int i = 0; i < ARRAY_SIZE(iommu_buses); i++) {
                nb[i].notifier_call = iommu_bus_notifier;
                bus_register_notifier(iommu_buses[i], &nb[i]);
        }
2019-08-19 15:22:53 +02:00
        return 0;
}
subsys_initcall(iommu_subsys_init);
2022-08-15 17:20:06 +01:00
static int remove_iommu_group(struct device *dev, void *data)
{
        if (dev->iommu && dev->iommu->iommu_dev == data)
                iommu_release_device(dev);
        return 0;
}
2021-04-01 14:56:26 +01:00
/**
 * iommu_device_register() - Register an IOMMU hardware instance
 * @iommu: IOMMU handle for the instance
 * @ops: IOMMU ops to associate with the instance
 * @hwdev: (optional) actual instance device, used for fwnode lookup
 *
 * Return: 0 on success, or an error.
 */
int iommu_device_register(struct iommu_device *iommu,
                          const struct iommu_ops *ops, struct device *hwdev)
2017-02-01 13:23:08 +01:00
{
2022-08-15 17:20:06 +01:00
        int err = 0;
2021-04-01 14:56:26 +01:00
        /* We need to be able to take module references appropriately */
        if (WARN_ON(is_module_address((unsigned long)ops) && !ops->owner))
                return -EINVAL;
        iommu->ops = ops;
        if (hwdev)
2022-08-01 19:47:58 +03:00
                iommu->fwnode = dev_fwnode(hwdev);
2021-04-01 14:56:26 +01:00
2017-02-01 13:23:08 +01:00
        spin_lock(&iommu_device_lock);
        list_add_tail(&iommu->list, &iommu_device_list);
        spin_unlock(&iommu_device_lock);
2022-08-15 17:20:06 +01:00
2023-11-21 18:04:02 +00:00
        for (int i = 0; i < ARRAY_SIZE(iommu_buses) && !err; i++)
2022-08-15 17:20:06 +01:00
                err = bus_iommu_probe(iommu_buses[i]);
        if (err)
                iommu_device_unregister(iommu);
2025-04-24 18:41:28 +01:00
        else
                WRITE_ONCE(iommu->ready, true);
2022-08-15 17:20:06 +01:00
        return err;
2017-02-01 13:23:08 +01:00
}
2019-12-19 12:03:37 +00:00
EXPORT_SYMBOL_GPL(iommu_device_register);
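For context, a hardware IOMMU driver typically publishes its instance from its own probe path roughly as sketched below; my_iommu, my_iommu_ops and my_iommu_probe() are placeholders rather than code from this file, and a real driver would also tear down the sysfs entry on failure.

/* Hypothetical driver-side usage of iommu_device_register(). */
struct my_iommu {
        struct iommu_device iommu;      /* embedded core handle */
        /* driver-private state would follow */
};

static const struct iommu_ops my_iommu_ops;     /* callbacks filled in elsewhere */

static int my_iommu_probe(struct platform_device *pdev)
{
        struct my_iommu *smmu;
        int ret;

        smmu = devm_kzalloc(&pdev->dev, sizeof(*smmu), GFP_KERNEL);
        if (!smmu)
                return -ENOMEM;

        ret = iommu_device_sysfs_add(&smmu->iommu, &pdev->dev, NULL,
                                     "my-iommu.%s", dev_name(&pdev->dev));
        if (ret)
                return ret;

        /* Publish the instance; the core then (re)probes client devices. */
        return iommu_device_register(&smmu->iommu, &my_iommu_ops, &pdev->dev);
}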
2017-02-01 13:23:08 +01:00
void iommu_device_unregister(struct iommu_device *iommu)
{
2022-08-15 17:20:06 +01:00
        for (int i = 0; i < ARRAY_SIZE(iommu_buses); i++)
                bus_for_each_dev(iommu_buses[i], NULL, iommu, remove_iommu_group);
2017-02-01 13:23:08 +01:00
        spin_lock(&iommu_device_lock);
        list_del(&iommu->list);
        spin_unlock(&iommu_device_lock);
2023-08-22 13:15:57 -03:00
        /* Pairs with the alloc in generic_single_device_group() */
        iommu_group_put(iommu->singleton_group);
        iommu->singleton_group = NULL;
2017-02-01 13:23:08 +01:00
}
2019-12-19 12:03:37 +00:00
EXPORT_SYMBOL_GPL(iommu_device_unregister);
2017-02-01 13:23:08 +01:00
2023-08-02 21:08:02 -03:00
#if IS_ENABLED(CONFIG_IOMMUFD_TEST)
void iommu_device_unregister_bus(struct iommu_device *iommu,
2024-02-16 15:40:24 +01:00
                                 const struct bus_type *bus,
2023-08-02 21:08:02 -03:00
                                 struct notifier_block *nb)
{
        bus_unregister_notifier(bus, nb);
        iommu_device_unregister(iommu);
}
EXPORT_SYMBOL_GPL(iommu_device_unregister_bus);

/*
 * Register an iommu driver against a single bus. This is only used by iommufd
 * selftest to create a mock iommu driver. The caller must provide
 * some memory to hold a notifier_block.
 */
int iommu_device_register_bus(struct iommu_device *iommu,
2024-02-16 15:40:24 +01:00
                              const struct iommu_ops *ops,
                              const struct bus_type *bus,
2023-08-02 21:08:02 -03:00
                              struct notifier_block *nb)
{
        int err;

        iommu->ops = ops;
        nb->notifier_call = iommu_bus_notifier;
        err = bus_register_notifier(bus, nb);
        if (err)
                return err;

        spin_lock(&iommu_device_lock);
        list_add_tail(&iommu->list, &iommu_device_list);
        spin_unlock(&iommu_device_lock);

        err = bus_iommu_probe(bus);
        if (err) {
                iommu_device_unregister_bus(iommu, bus, nb);
                return err;
        }
        return 0;
}
EXPORT_SYMBOL_GPL(iommu_device_register_bus);
#endif
2020-03-26 16:08:30 +01:00
static struct dev_iommu *dev_iommu_get(struct device *dev)
2019-06-03 15:57:48 +01:00
{
2020-03-26 16:08:30 +01:00
        struct dev_iommu *param = dev->iommu;
2019-06-03 15:57:48 +01:00
2023-12-07 14:03:11 -04:00
        lockdep_assert_held(&iommu_probe_device_lock);
2019-06-03 15:57:48 +01:00
        if (param)
                return param;
        param = kzalloc(sizeof(*param), GFP_KERNEL);
        if (!param)
                return NULL;
        mutex_init(&param->lock);
2020-03-26 16:08:30 +01:00
        dev->iommu = param;
2019-06-03 15:57:48 +01:00
        return param;
}
2025-02-28 15:46:32 +00:00
void dev_iommu_free(struct device *dev)
2019-06-03 15:57:48 +01:00
{
2022-01-31 12:42:35 +05:30
        struct dev_iommu *param = dev->iommu;
2020-03-26 16:08:30 +01:00
        dev->iommu = NULL;
2022-01-31 12:42:35 +05:30
        if (param->fwspec) {
                fwnode_handle_put(param->fwspec->iommu_fwnode);
                kfree(param->fwspec);
        }
        kfree(param);
2019-06-03 15:57:48 +01:00
}
2023-11-21 18:03:57 +00:00
/*
 * Internal equivalent of device_iommu_mapped() for when we care that a device
 * actually has API ops, and don't want false positives from VFIO-only groups.
 */
static bool dev_has_iommu(struct device *dev)
{
        return dev->iommu && dev->iommu->iommu_dev;
}
2022-10-31 08:59:06 +08:00
static u32 dev_iommu_get_max_pasids(struct device *dev)
{
        u32 max_pasids = 0, bits = 0;
        int ret;

        if (dev_is_pci(dev)) {
                ret = pci_max_pasids(to_pci_dev(dev));
                if (ret > 0)
                        max_pasids = ret;
        } else {
                ret = device_property_read_u32(dev, "pasid-num-bits", &bits);
                if (!ret)
                        max_pasids = 1UL << bits;
        }
        return min_t(u32, max_pasids, dev->iommu->iommu_dev->max_pasids);
}
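For illustration: a non-PCI device whose firmware supplies "pasid-num-bits" = 20 yields 1UL << 20 = 1048576 candidate PASIDs here, which min_t() then clamps to the max_pasids advertised by the registered iommu_device; for a PCI device the starting value instead comes from pci_max_pasids() reading the PASID extended capability.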
2023-12-07 14:03:12 -04:00
void dev_iommu_priv_set(struct device *dev, void *priv)
{
        /* FSL_PAMU does something weird */
        if (!IS_ENABLED(CONFIG_FSL_PAMU))
                lockdep_assert_held(&iommu_probe_device_lock);
        dev->iommu->priv = priv;
}
EXPORT_SYMBOL_GPL(dev_iommu_priv_set);
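A hypothetical driver-side sketch of the intended pattern: stash per-device state from the probe_device() callback (which the core invokes with iommu_probe_device_lock held, as asserted above) and fetch it back later with dev_iommu_priv_get(); struct my_master and my_iommu_instance are illustrative names only.

struct my_master {
        u32 sid;                                /* example driver-private data */
};

static struct iommu_device my_iommu_instance;   /* assumed registered elsewhere */

static struct iommu_device *my_probe_device(struct device *dev)
{
        struct my_master *master = kzalloc(sizeof(*master), GFP_KERNEL);

        if (!master)
                return ERR_PTR(-ENOMEM);
        dev_iommu_priv_set(dev, master);        /* runs under iommu_probe_device_lock */
        return &my_iommu_instance;
}

static void my_release_device(struct device *dev)
{
        kfree(dev_iommu_priv_get(dev));         /* fetch and free the stashed state */
}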
2023-06-05 21:59:43 -03:00
/*
 * Init the dev->iommu and dev->iommu_group in the struct device and get the
 * driver probed
 */
2025-02-28 15:46:31 +00:00
static int iommu_init_device(struct device *dev)
2018-11-30 10:31:59 +01:00
{
2025-02-28 15:46:31 +00:00
        const struct iommu_ops *ops;
2020-04-29 15:36:45 +02:00
        struct iommu_device *iommu_dev;
        struct iommu_group *group;
2019-06-03 15:57:48 +01:00
        int ret;
2018-11-30 10:31:59 +01:00
2023-06-05 21:59:43 -03:00
        if (!dev_iommu_get(dev))
                return -ENOMEM;
2025-02-28 15:46:31 +00:00
/*
iommu: Get DT/ACPI parsing into the proper probe path
In hindsight, there were some crucial subtleties overlooked when moving
{of,acpi}_dma_configure() to driver probe time to allow waiting for
IOMMU drivers with -EPROBE_DEFER, and these have become an
ever-increasing source of problems. The IOMMU API has some fundamental
assumptions that iommu_probe_device() is called for every device added
to the system, in the order in which they are added. Calling it in a
random order or not at all dependent on driver binding leads to
malformed groups, a potential lack of isolation for devices with no
driver, and all manner of unexpected concurrency and race conditions.
We've attempted to mitigate the latter with point-fix bodges like
iommu_probe_device_lock, but it's a losing battle and the time has come
to bite the bullet and address the true source of the problem instead.
The crux of the matter is that the firmware parsing actually serves two
distinct purposes; one is identifying the IOMMU instance associated with
a device so we can check its availability, the second is actually
telling that instance about the relevant firmware-provided data for the
device. However the latter also depends on the former, and at the time
there was no good place to defer and retry that separately from the
availability check we also wanted for client driver probe.
Nowadays, though, we have a proper notion of multiple IOMMU instances in
the core API itself, and each one gets a chance to probe its own devices
upon registration, so we can finally make that work as intended for
DT/IORT/VIOT platforms too. All we need is for iommu_probe_device() to
be able to run the iommu_fwspec machinery currently buried deep in the
wrong end of {of,acpi}_dma_configure(). Luckily it turns out to be
surprisingly straightforward to bootstrap this transformation by pretty
much just calling the same path twice. At client driver probe time,
dev->driver is obviously set; conversely at device_add(), or a
subsequent bus_iommu_probe(), any device waiting for an IOMMU really
should *not* have a driver already, so we can use that as a condition to
disambiguate the two cases, and avoid recursing back into the IOMMU core
at the wrong times.
Obviously this isn't the nicest thing, but for now it gives us a
functional baseline to then unpick the layers in between without many
more awkward cross-subsystem patches. There are some minor side-effects
like dma_range_map potentially being created earlier, and some debug
prints being repeated, but these aren't significantly detrimental. Let's
make things work first, then deal with making them nice.
With the basic flow finally in the right order again, the next step is
probably turning the bus->dma_configure paths inside-out, since all we
really need from bus code is its notion of which device and input ID(s)
to parse the common firmware properties with...
Acked-by: Bjorn Helgaas <bhelgaas@google.com> # pci-driver.c
Acked-by: Rob Herring (Arm) <robh@kernel.org> # of/device.c
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/e3b191e6fd6ca9a1e84c5e5e40044faf97abb874.1740753261.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2025-02-28 15:46:33 +00:00
 * For FDT-based systems and ACPI IORT/VIOT, the common firmware parsing
 * is buried in the bus dma_configure path. Properly unpicking that is
 * still a big job, so for now just invoke the whole thing. The device
 * already having a driver bound means dma_configure has already run and
2025-03-11 15:19:25 +00:00
 * found no IOMMU to wait for, so there's no point calling it again.
2025-02-28 15:46:33 +00:00
*/
2025-03-11 15:19:25 +00:00
        if (!dev->iommu->fwspec && !dev->driver && dev->bus->dma_configure) {
2025-02-28 15:46:33 +00:00
                mutex_unlock(&iommu_probe_device_lock);
                dev->bus->dma_configure(dev);
                mutex_lock(&iommu_probe_device_lock);
2025-03-11 15:19:25 +00:00
                /* If another instance finished the job for us, skip it */
                if (!dev->iommu || dev->iommu_group)
                        return -ENODEV;
2025-02-28 15:46:33 +00:00
        }
        /*
         * At this point, relevant devices either now have a fwspec which will
         * match ops registered with a non-NULL fwnode, or we can reasonably
2025-02-28 15:46:31 +00:00
         * assume that only one of Intel, AMD, s390, PAMU or legacy SMMUv2 can
         * be present, and that any of their registered instances has suitable
         * ops for probing, and thus cheekily co-opt the same mechanism.
         */
        ops = iommu_fwspec_ops(dev->iommu->fwspec);
        if (!ops) {
                ret = -ENODEV;
                goto err_free;
        }
2018-11-30 10:31:59 +01:00
2019-12-19 12:03:41 +00:00
        if (!try_module_get(ops->owner)) {
                ret = -EINVAL;
2020-04-29 15:37:11 +02:00
                goto err_free;
2019-12-19 12:03:41 +00:00
        }
2020-04-29 15:36:45 +02:00
        iommu_dev = ops->probe_device(dev);
2020-04-29 15:37:11 +02:00
        if (IS_ERR(iommu_dev)) {
                ret = PTR_ERR(iommu_dev);
2023-06-05 21:59:43 -03:00
                goto err_module_put;
2020-04-29 15:37:11 +02:00
        }
2023-08-22 13:15:57 -03:00
        dev->iommu->iommu_dev = iommu_dev;
2020-04-29 15:36:45 +02:00
2023-06-05 21:59:44 -03:00
        ret = iommu_device_link(iommu_dev, dev);
        if (ret)
                goto err_release;
2020-04-29 15:36:45 +02:00
2023-06-05 21:59:43 -03:00
        group = ops->device_group(dev);
        if (WARN_ON_ONCE(group == NULL))
                group = ERR_PTR(-EINVAL);
2020-04-29 15:36:49 +02:00
        if (IS_ERR(group)) {
2020-04-29 15:36:45 +02:00
                ret = PTR_ERR(group);
2023-06-05 21:59:44 -03:00
                goto err_unlink;
2020-04-29 15:36:45 +02:00
        }
2023-06-05 21:59:43 -03:00
        dev->iommu_group = group;
2020-04-29 15:36:45 +02:00
2023-06-05 21:59:43 -03:00
        dev->iommu->max_pasids = dev_iommu_get_max_pasids(dev);
        if (ops->is_attach_deferred)
                dev->iommu->attach_deferred = ops->is_attach_deferred(dev);
2019-12-19 12:03:41 +00:00
        return 0;
2018-12-20 10:02:20 +01:00
2023-06-05 21:59:44 -03:00
err_unlink:
        iommu_device_unlink(iommu_dev, dev);
2023-06-05 21:59:43 -03:00
err_release:
2022-06-21 16:14:26 +01:00
        if (ops->release_device)
                ops->release_device(dev);
2023-06-05 21:59:43 -03:00
err_module_put:
2019-12-19 12:03:41 +00:00
        module_put(ops->owner);
2020-04-29 15:37:11 +02:00
err_free:
2023-08-22 13:15:57 -03:00
        dev->iommu->iommu_dev = NULL;
2020-03-26 16:08:30 +01:00
        dev_iommu_free(dev);
2023-06-05 21:59:43 -03:00
        return ret;
}
2020-04-29 15:37:11 +02:00
2023-06-05 21:59:43 -03:00
static void iommu_deinit_device(struct device *dev)
{
        struct iommu_group *group = dev->iommu_group;
        const struct iommu_ops *ops = dev_iommu_ops(dev);
2022-11-04 19:51:43 +00:00
2023-06-05 21:59:43 -03:00
        lockdep_assert_held(&group->mutex);
2023-06-05 21:59:44 -03:00
        iommu_device_unlink(dev->iommu->iommu_dev, dev);
2023-06-05 21:59:43 -03:00
        /*
         * release_device() must stop using any attached domain on the device.
2024-03-05 20:21:17 +08:00
         * If there are still other devices in the group, they are not affected
2023-06-05 21:59:43 -03:00
         * by this callback.
         *
2024-03-05 20:21:17 +08:00
         * If the iommu driver provides release_domain, the core code ensures
         * that domain is attached prior to calling release_device. Drivers can
         * use this to enforce a translation on the idle iommu. Typically, the
         * global static blocked_domain is a good choice.
         *
         * Otherwise, the iommu driver must set the device to either an identity
         * or a blocking translation in release_device() and stop using any
         * domain pointer, as it is going to be freed.
         *
         * Regardless, if a delayed attach never occurred, then the release
         * should still avoid touching any hardware configuration either.
2023-06-05 21:59:43 -03:00
         */
2024-03-05 20:21:17 +08:00
        if (!dev->iommu->attach_deferred && ops->release_domain)
                ops->release_domain->ops->attach_dev(ops->release_domain, dev);
2023-06-05 21:59:43 -03:00
        if (ops->release_device)
                ops->release_device(dev);

        /*
         * If this is the last driver to use the group then we must free the
         * domains before we do the module_put().
         */
        if (list_empty(&group->devices)) {
                if (group->default_domain) {
                        iommu_domain_free(group->default_domain);
                        group->default_domain = NULL;
                }
                if (group->blocking_domain) {
                        iommu_domain_free(group->blocking_domain);
                        group->blocking_domain = NULL;
                }
                group->domain = NULL;
        }

        /* Caller must put iommu_group */
        dev->iommu_group = NULL;
        module_put(ops->owner);
        dev_iommu_free(dev);
2025-04-10 12:23:48 +01:00
#ifdef CONFIG_IOMMU_DMA
        dev->dma_iommu = false;
#endif
2018-11-30 10:31:59 +01:00
}
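To make the release_domain contract described in the comment above concrete, a driver-side sketch could look as follows; every my_* name is a placeholder and the attach_dev body is reduced to a comment, so treat this as an outline rather than a reference implementation.

static int my_blocked_attach_dev(struct iommu_domain *domain, struct device *dev)
{
        /* Point the device's translation at a faulting/blocked context here. */
        return 0;
}

static const struct iommu_domain_ops my_blocked_domain_ops = {
        .attach_dev = my_blocked_attach_dev,
};

static struct iommu_domain my_blocked_domain = {
        .type = IOMMU_DOMAIN_BLOCKED,
        .ops  = &my_blocked_domain_ops,
};

static const struct iommu_ops my_ops = {
        /* other callbacks omitted */
        .release_domain = &my_blocked_domain,
};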
2025-03-21 10:19:24 -07:00
static struct iommu_domain *pasid_array_entry_to_domain(void *entry)
{
        if (xa_pointer_tag(entry) == IOMMU_PASID_ARRAY_DOMAIN)
                return xa_untag_pointer(entry);
        return ((struct iommu_attach_handle *)xa_untag_pointer(entry))->domain;
}
2023-11-15 18:25:44 +00:00
DEFINE_MUTEX(iommu_probe_device_lock);
2020-04-29 15:36:47 +02:00
static int __iommu_probe_device(struct device *dev, struct list_head *group_list)
2018-11-30 10:31:59 +01:00
{
2020-04-29 15:36:48 +02:00
        struct iommu_group *group;
2023-06-05 21:59:47 -03:00
        struct group_device *gdev;
2020-04-29 15:36:48 +02:00
        int ret;
2018-11-30 10:31:59 +01:00
2022-11-04 19:51:43 +00:00
        /*
         * Serialise to avoid races between IOMMU drivers registering in
         * parallel and/or the "replay" calls from ACPI/OF code via client
         * driver probe. Once the latter have been cleaned up we should
         * probably be able to use device_lock() here to minimise the scope,
         * but for now enforcing a simple global ordering is fine.
         */
2023-11-15 18:25:44 +00:00
        lockdep_assert_held(&iommu_probe_device_lock);
2020-04-29 15:36:48 +02:00
2023-06-05 21:59:39 -03:00
        /* Device is probed already if in a group */
2023-11-15 18:25:44 +00:00
        if (dev->iommu_group)
                return 0;
2020-05-25 15:01:22 +02:00
2025-02-28 15:46:31 +00:00
        ret = iommu_init_device(dev);
2023-06-05 21:59:43 -03:00
        if (ret)
2023-11-15 18:25:44 +00:00
                return ret;
2025-03-12 15:01:31 +00:00
        /*
         * And if we do now see any replay calls, they would indicate someone
         * misusing the dma_configure path outside bus code.
         */
        if (dev->driver)
                dev_WARN(dev, "late IOMMU probe at driver bind, something fishy here!\n");
2020-04-29 15:36:45 +02:00
2023-06-05 21:59:43 -03:00
        group = dev->iommu_group;
2023-06-05 21:59:47 -03:00
        gdev = iommu_group_alloc_device(group, dev);
2021-08-10 10:14:00 +05:30
        mutex_lock(&group->mutex);
2023-06-05 21:59:47 -03:00
        if (IS_ERR(gdev)) {
                ret = PTR_ERR(gdev);
2023-06-05 21:59:41 -03:00
                goto err_put_group;
2023-06-05 21:59:47 -03:00
        }
2020-04-29 15:36:48 +02:00
2023-06-05 21:59:48 -03:00
        /*
         * The gdev must be in the list before calling
         * iommu_setup_default_domain()
         */
2023-06-05 21:59:47 -03:00
        list_add_tail(&gdev->list, &group->devices);
2023-06-05 21:59:48 -03:00
        WARN_ON(group->default_domain && !group->domain);
2023-05-11 01:42:12 -03:00
        if (group->default_domain)
                iommu_create_device_direct_mappings(group->default_domain, dev);
2023-05-11 01:42:07 -03:00
        if (group->domain) {
2023-05-11 01:42:06 -03:00
                ret = __iommu_device_set_domain(group, dev, group->domain, 0);
2023-05-11 01:42:12 -03:00
                if (ret)
2023-06-05 21:59:48 -03:00
                        goto err_remove_gdev;
        } else if (!group->default_domain && !group_list) {
2023-05-11 01:42:12 -03:00
                ret = iommu_setup_default_domain(group, 0);
                if (ret)
2023-06-05 21:59:48 -03:00
                        goto err_remove_gdev;
        } else if (!group->default_domain) {
                /*
                 * With a group_list argument we defer the default_domain setup
                 * to the caller by providing a de-duplicated list of groups
                 * that need further setup.
                 */
                if (list_empty(&group->entry))
                        list_add_tail(&group->entry, group_list);
2020-11-19 16:58:46 +00:00
        }
2020-04-29 15:36:48 +02:00
2024-04-19 17:54:45 +01:00
        if (group->default_domain)
                iommu_setup_dma_ops(dev);

        mutex_unlock(&group->mutex);
2020-04-29 15:36:48 +02:00
        return 0;
2023-06-05 21:59:48 -03:00
err_remove_gdev:
        list_del(&gdev->list);
        __iommu_group_free_device(group, gdev);
2023-06-05 21:59:41 -03:00
err_put_group:
2023-06-05 21:59:43 -03:00
        iommu_deinit_device(dev);
2023-05-11 01:42:07 -03:00
        mutex_unlock(&group->mutex);
        iommu_group_put(group);
2020-04-29 15:37:10 +02:00
2020-04-29 15:36:48 +02:00
        return ret;
2018-11-30 10:31:59 +01:00
}
2020-04-29 15:37:10 +02:00
int iommu_probe_device(struct device *dev)
2018-11-30 10:31:59 +01:00
{
2022-06-21 16:14:25 +01:00
        const struct iommu_ops *ops;
2020-04-29 15:36:48 +02:00
        int ret;
2023-03-22 14:49:52 +08:00
2023-11-15 18:25:44 +00:00
        mutex_lock(&iommu_probe_device_lock);
2020-04-29 15:36:48 +02:00
        ret = __iommu_probe_device(dev, NULL);
2023-11-15 18:25:44 +00:00
        mutex_unlock(&iommu_probe_device_lock);
2020-04-29 15:36:48 +02:00
        if (ret)
2023-06-05 21:59:48 -03:00
                return ret;
2023-03-22 14:49:52 +08:00
2022-06-21 16:14:25 +01:00
        ops = dev_iommu_ops(dev);
2020-04-29 15:36:48 +02:00
        if (ops->probe_finalize)
                ops->probe_finalize(dev);
        return 0;
2023-03-22 14:49:52 +08:00
}
2023-06-05 21:59:42 -03:00
static void __iommu_group_free_device(struct iommu_group *group,
                                      struct group_device *grp_dev)
2023-03-22 14:49:52 +08:00
{
        struct device *dev = grp_dev->dev;

        sysfs_remove_link(group->devices_kobj, grp_dev->name);
        sysfs_remove_link(&dev->kobj, "iommu_group");
        trace_remove_device_from_group(group->id, dev);
2023-06-05 21:59:42 -03:00
        /*
         * If the group has become empty then ownership must have been
         * released, and the current domain must be set back to NULL or
         * the default domain.
         */
        if (list_empty(&group->devices))
                WARN_ON(group->owner_cnt ||
                        group->domain != group->default_domain);
2023-03-22 14:49:52 +08:00
        kfree(grp_dev->name);
        kfree(grp_dev);
}
2023-06-05 21:59:43 -03:00
/* Remove the iommu_group from the struct device. */
2023-06-05 21:59:42 -03:00
static void __iommu_group_remove_device(struct device *dev)
2018-11-30 10:31:59 +01:00
{
2023-03-22 14:49:53 +08:00
        struct iommu_group *group = dev->iommu_group;
        struct group_device *device;
2020-04-29 15:36:45 +02:00
2023-03-22 14:49:53 +08:00
        mutex_lock(&group->mutex);
2023-06-05 21:59:42 -03:00
        for_each_group_device(group, device) {
                if (device->dev != dev)
                        continue;
2023-03-22 14:49:53 +08:00
2023-06-05 21:59:42 -03:00
                list_del(&device->list);
                __iommu_group_free_device(group, device);
2023-11-21 18:03:57 +00:00
                if (dev_has_iommu(dev))
2023-06-05 21:59:43 -03:00
                        iommu_deinit_device(dev);
                else
                        dev->iommu_group = NULL;
2023-06-05 21:59:46 -03:00
                break;
2023-06-05 21:59:42 -03:00
        }
2023-06-05 21:59:43 -03:00
        mutex_unlock(&group->mutex);
2023-03-22 14:49:53 +08:00
        /*
2023-06-05 21:59:47 -03:00
         * Pairs with the get in iommu_init_device() or
         * iommu_group_add_device()
2023-03-22 14:49:53 +08:00
         */
2023-06-05 21:59:43 -03:00
        iommu_group_put(group);
2023-06-05 21:59:42 -03:00
}
2023-03-22 14:49:53 +08:00
2023-06-05 21:59:42 -03:00
static void iommu_release_device(struct device *dev)
{
        struct iommu_group *group = dev->iommu_group;
2019-06-03 15:57:48 +01:00
2023-06-05 21:59:46 -03:00
        if (group)
                __iommu_group_remove_device(dev);
2019-06-03 15:57:48 +01:00
2023-06-05 21:59:46 -03:00
        /* Free any fwspec if no iommu_driver was ever attached */
        if (dev->iommu)
                dev_iommu_free(dev);
2018-11-30 10:31:59 +01:00
}
2015-05-28 18:41:29 +02:00
2017-01-05 18:38:26 +00:00
static int __init iommu_set_def_domain_type(char *str)
{
        bool pt;
2018-05-14 19:22:25 +03:00
        int ret;
2017-01-05 18:38:26 +00:00
2018-05-14 19:22:25 +03:00
        ret = kstrtobool(str, &pt);
        if (ret)
                return ret;
2017-01-05 18:38:26 +00:00
2019-08-19 15:22:48 +02:00
        if (pt)
                iommu_set_default_passthrough(true);
        else
                iommu_set_default_translated(true);
2019-08-19 15:22:46 +02:00
2017-01-05 18:38:26 +00:00
        return 0;
}
early_param("iommu.passthrough", iommu_set_def_domain_type);
2018-09-20 17:10:23 +01:00
static int __init iommu_dma_setup(char *str)
{
2021-04-01 17:52:54 +02:00
        int ret = kstrtobool(str, &iommu_dma_strict);

        if (!ret)
                iommu_cmd_line |= IOMMU_CMD_LINE_STRICT;
        return ret;
2018-09-20 17:10:23 +01:00
}
early_param("iommu.strict", iommu_dma_setup);
2021-07-12 19:12:20 +08:00
void iommu_set_dma_strict(void)
2021-04-01 17:52:54 +02:00
{
2021-07-12 19:12:20 +08:00
        iommu_dma_strict = true;
2021-08-11 13:21:34 +01:00
        if (iommu_def_domain_type == IOMMU_DOMAIN_DMA_FQ)
                iommu_def_domain_type = IOMMU_DOMAIN_DMA;
2021-04-01 17:52:54 +02:00
}
2012-05-30 14:18:53 -06:00
static ssize_t iommu_group_attr_show(struct kobject *kobj,
                                     struct attribute *__attr, char *buf)
2011-10-21 15:56:05 -04:00
{
2012-05-30 14:18:53 -06:00
        struct iommu_group_attribute *attr = to_iommu_group_attr(__attr);
        struct iommu_group *group = to_iommu_group(kobj);
        ssize_t ret = -EIO;
2011-10-21 15:56:05 -04:00
2012-05-30 14:18:53 -06:00
        if (attr->show)
                ret = attr->show(group, buf);
        return ret;
}

static ssize_t iommu_group_attr_store(struct kobject *kobj,
                                      struct attribute *__attr,
                                      const char *buf, size_t count)
{
        struct iommu_group_attribute *attr = to_iommu_group_attr(__attr);
        struct iommu_group *group = to_iommu_group(kobj);
        ssize_t ret = -EIO;
2011-10-21 15:56:05 -04:00
2012-05-30 14:18:53 -06:00
        if (attr->store)
                ret = attr->store(group, buf, count);
        return ret;
2011-10-21 15:56:05 -04:00
}
2012-05-30 14:18:53 -06:00
static const struct sysfs_ops iommu_group_sysfs_ops = {
        .show = iommu_group_attr_show,
        .store = iommu_group_attr_store,
};
static int iommu_group_create_file(struct iommu_group *group,
                                   struct iommu_group_attribute *attr)
{
        return sysfs_create_file(&group->kobj, &attr->attr);
}
static void iommu_group_remove_file(struct iommu_group *group,
                                    struct iommu_group_attribute *attr)
{
        sysfs_remove_file(&group->kobj, &attr->attr);
}

static ssize_t iommu_group_show_name(struct iommu_group *group, char *buf)
{
        return sysfs_emit(buf, "%s\n", group->name);
}

/**
 * iommu_insert_resv_region - Insert a new region in the
 * list of reserved regions.
 * @new: new region to insert
 * @regions: list of regions
 *
 * Elements are sorted by start address and overlapping segments
 * of the same type are merged.
 */
static int iommu_insert_resv_region(struct iommu_resv_region *new,
                                    struct list_head *regions)
{
        struct iommu_resv_region *iter, *tmp, *nr, *top;
        LIST_HEAD(stack);

        nr = iommu_alloc_resv_region(new->start, new->length,
                                     new->prot, new->type, GFP_KERNEL);
        if (!nr)
                return -ENOMEM;

        /* First add the new element based on start address sorting */
        list_for_each_entry(iter, regions, list) {
                if (nr->start < iter->start ||
                    (nr->start == iter->start && nr->type <= iter->type))
                        break;
        }
        list_add_tail(&nr->list, &iter->list);

        /* Merge overlapping segments of type nr->type in @regions, if any */
        list_for_each_entry_safe(iter, tmp, regions, list) {
                phys_addr_t top_end, iter_end = iter->start + iter->length - 1;

                /* no merge needed on elements of different types than @new */
                if (iter->type != new->type) {
                        list_move_tail(&iter->list, &stack);
                        continue;
                }

                /* look for the last stack element of same type as @iter */
                list_for_each_entry_reverse(top, &stack, list)
                        if (top->type == iter->type)
                                goto check_overlap;

                list_move_tail(&iter->list, &stack);
                continue;

check_overlap:
                top_end = top->start + top->length - 1;

                if (iter->start > top_end + 1) {
                        list_move_tail(&iter->list, &stack);
                } else {
                        top->length = max(top_end, iter_end) - top->start + 1;
                        list_del(&iter->list);
                        kfree(iter);
                }
        }
        list_splice(&stack, regions);
        return 0;
}
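
/*
 * A minimal sketch, illustrative only and not part of this file, of the
 * sort-and-merge behaviour above: two overlapping IOMMU_RESV_RESERVED
 * regions inserted into an empty list collapse into a single element.
 * The helper name below is hypothetical.
 */
#if 0
static void example_resv_merge(void)
{
        struct iommu_resv_region *a, *b;
        LIST_HEAD(regions);

        /* [0x1000, 0x1fff] and [0x1800, 0x27ff] overlap by 0x800 bytes */
        a = iommu_alloc_resv_region(0x1000, 0x1000, 0, IOMMU_RESV_RESERVED,
                                    GFP_KERNEL);
        b = iommu_alloc_resv_region(0x1800, 0x1000, 0, IOMMU_RESV_RESERVED,
                                    GFP_KERNEL);
        if (!a || !b)
                goto out;

        /*
         * Copies of @a and @b are inserted, so @regions ends up holding a
         * single element covering [0x1000, 0x27ff].
         */
        iommu_insert_resv_region(a, &regions);
        iommu_insert_resv_region(b, &regions);
out:
        /* The originals remain owned by the caller ... */
        kfree(a);
        kfree(b);
        /* ... and the merged copies left on @regions would need freeing too. */
}
#endif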
static int
iommu_insert_device_resv_regions(struct list_head *dev_resv_regions,
                                 struct list_head *group_resv_regions)
{
        struct iommu_resv_region *entry;
        int ret = 0;

        list_for_each_entry(entry, dev_resv_regions, list) {
                ret = iommu_insert_resv_region(entry, group_resv_regions);
                if (ret)
                        break;
        }
        return ret;
}
int iommu_get_group_resv_regions(struct iommu_group *group,
                                 struct list_head *head)
{
        struct group_device *device;
        int ret = 0;

        mutex_lock(&group->mutex);
        for_each_group_device(group, device) {
                struct list_head dev_resv_regions;

                /*
                 * Non-API groups still expose reserved_regions in sysfs,
                 * so filter out calls that get here that way.
                 */
                if (!dev_has_iommu(device->dev))
                        break;

                INIT_LIST_HEAD(&dev_resv_regions);
                iommu_get_resv_regions(device->dev, &dev_resv_regions);
                ret = iommu_insert_device_resv_regions(&dev_resv_regions, head);
                iommu_put_resv_regions(device->dev, &dev_resv_regions);
                if (ret)
                        break;
        }
        mutex_unlock(&group->mutex);
        return ret;
}
EXPORT_SYMBOL_GPL(iommu_get_group_resv_regions);
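
/*
 * A minimal consumer sketch, assuming a hypothetical caller (for instance a
 * VFIO-like user of the group API): iommu_get_group_resv_regions() fills the
 * caller-supplied list with freshly allocated entries, so the caller must
 * free them, as the sysfs show routine below also does. Illustrative only,
 * not part of this file.
 */
#if 0
static void example_dump_group_resv_regions(struct iommu_group *group)
{
        struct iommu_resv_region *region, *next;
        LIST_HEAD(resv_regions);

        if (iommu_get_group_resv_regions(group, &resv_regions))
                pr_warn("partial reserved region list for group %d\n",
                        iommu_group_id(group));

        list_for_each_entry_safe(region, next, &resv_regions, list) {
                pr_info("resv [0x%016llx - 0x%016llx] type %d\n",
                        (unsigned long long)region->start,
                        (unsigned long long)(region->start + region->length - 1),
                        region->type);
                kfree(region);
        }
}
#endif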
static ssize_t iommu_group_show_resv_regions(struct iommu_group *group,
                                             char *buf)
{
        struct iommu_resv_region *region, *next;
        struct list_head group_resv_regions;
        int offset = 0;

        INIT_LIST_HEAD(&group_resv_regions);
        iommu_get_group_resv_regions(group, &group_resv_regions);

        list_for_each_entry_safe(region, next, &group_resv_regions, list) {
                offset += sysfs_emit_at(buf, offset, "0x%016llx 0x%016llx %s\n",
                                        (long long)region->start,
                                        (long long)(region->start +
                                                    region->length - 1),
                                        iommu_group_resv_type_string[region->type]);
                kfree(region);
        }

        return offset;
}
static ssize_t iommu_group_show_type(struct iommu_group *group,
                                     char *buf)
{
        char *type = "unknown";

        mutex_lock(&group->mutex);
        if (group->default_domain) {
                switch (group->default_domain->type) {
                case IOMMU_DOMAIN_BLOCKED:
                        type = "blocked";
                        break;
                case IOMMU_DOMAIN_IDENTITY:
                        type = "identity";
                        break;
                case IOMMU_DOMAIN_UNMANAGED:
                        type = "unmanaged";
                        break;
                case IOMMU_DOMAIN_DMA:
                        type = "DMA";
                        break;
                case IOMMU_DOMAIN_DMA_FQ:
                        type = "DMA-FQ";
                        break;
                }
        }
        mutex_unlock(&group->mutex);

        return sysfs_emit(buf, "%s\n", type);
}
static IOMMU_GROUP_ATTR(name, S_IRUGO, iommu_group_show_name, NULL);
static IOMMU_GROUP_ATTR(reserved_regions, 0444,
                        iommu_group_show_resv_regions, NULL);
static IOMMU_GROUP_ATTR(type, 0644, iommu_group_show_type,
                        iommu_group_store_type);
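
/*
 * The attributes above back the per-group files under
 * /sys/kernel/iommu_groups/<id>/: "name" and "reserved_regions" are
 * read-only, while "type" is also writable through iommu_group_store_type()
 * so that userspace can request a different default domain type for the
 * group.
 */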
static void iommu_group_release(struct kobject *kobj)
{
        struct iommu_group *group = to_iommu_group(kobj);

        pr_debug("Releasing group %d\n", group->id);
        if (group->iommu_data_release)
                group->iommu_data_release(group->iommu_data);

        ida_free(&iommu_group_ida, group->id);

        /* Domains are free'd by iommu_deinit_device() */
        WARN_ON(group->default_domain);
        WARN_ON(group->blocking_domain);
        kfree(group->name);
        kfree(group);
}

static const struct kobj_type iommu_group_ktype = {
        .sysfs_ops = &iommu_group_sysfs_ops,
        .release = iommu_group_release,
};
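
/*
 * A minimal sketch, illustrative only, of the reference-count pattern
 * described for iommu_group_alloc() below: a hypothetical IOMMU driver
 * allocates a group, places a device in it, and then drops the allocation
 * reference so the group lives exactly as long as its member devices (or
 * other external references). The function name is made up for the example.
 */
#if 0
static int example_driver_group_device(struct device *dev)
{
        struct iommu_group *group;
        int ret;

        group = iommu_group_alloc();
        if (IS_ERR(group))
                return PTR_ERR(group);

        ret = iommu_group_add_device(group, dev);

        /* Drop the extra reference taken by iommu_group_alloc(). */
        iommu_group_put(group);
        return ret;
}
#endif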
/**
 * iommu_group_alloc - Allocate a new group
 *
 * This function is called by an iommu driver to allocate a new iommu
 * group. The iommu group represents the minimum granularity of the iommu.
 * Upon successful return, the caller holds a reference to the supplied
 * group in order to hold the group until devices are added. Use
 * iommu_group_put() to release this extra reference count, allowing the
 * group to be automatically reclaimed once it has no devices or external
 * references.
 */
struct iommu_group *iommu_group_alloc(void)
{
        struct iommu_group *group;
        int ret;

        group = kzalloc(sizeof(*group), GFP_KERNEL);
        if (!group)
                return ERR_PTR(-ENOMEM);

        group->kobj.kset = iommu_group_kset;
        mutex_init(&group->mutex);
        INIT_LIST_HEAD(&group->devices);
        INIT_LIST_HEAD(&group->entry);
        xa_init(&group->pasid_array);

        ret = ida_alloc(&iommu_group_ida, GFP_KERNEL);
        if (ret < 0) {
kfree(group);
2016-06-29 21:13:59 +02:00
return ERR_PTR(ret);
2012-05-30 14:18:53 -06:00
}
2016-06-29 21:13:59 +02:00
group->id = ret;
2011-10-21 15:56:05 -04:00
2012-05-30 14:18:53 -06:00
ret = kobject_init_and_add(&group->kobj, &iommu_group_ktype, NULL, "%d", group->id);
if (ret) {
2020-05-27 16:00:19 -05:00
kobject_put(&group->kobj);
2012-05-30 14:18:53 -06:00
return ERR_PTR(ret);
}
group->devices_kobj = kobject_create_and_add("devices", &group->kobj);
if (!group->devices_kobj) {
kobject_put(&group->kobj); /* triggers .release & free */
return ERR_PTR(-ENOMEM);
}
/*
 * The devices_kobj holds a reference on the group kobject, so
 * as long as that exists so will the group. We can therefore
 * use the devices_kobj for reference counting.
 */
kobject_put(&group->kobj);
2017-01-19 20:57:52 +00:00
ret = iommu_group_create_file(group, &iommu_group_attr_reserved_regions);
2023-02-15 21:21:16 -04:00
if (ret) {
kobject_put(group->devices_kobj);
2017-01-19 20:57:52 +00:00
return ERR_PTR(ret);
2023-02-15 21:21:16 -04:00
}
2017-01-19 20:57:52 +00:00
2018-07-11 13:59:36 -07:00
ret = iommu_group_create_file(group, &iommu_group_attr_type);
2023-02-15 21:21:16 -04:00
if (ret) {
kobject_put(group->devices_kobj);
2018-07-11 13:59:36 -07:00
return ERR_PTR(ret);
2023-02-15 21:21:16 -04:00
}
2018-07-11 13:59:36 -07:00
2015-05-28 18:41:25 +02:00
pr_debug ( " Allocated group %d \n " , group - > id ) ;
2012-05-30 14:18:53 -06:00
return group;
}
EXPORT_SYMBOL_GPL(iommu_group_alloc);
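/*
 * Illustrative sketch only, not part of the original source: a minimal
 * ->device_group() callback that gives every device its own group, in
 * the spirit of generic_device_group(). The my_driver_device_group()
 * name is hypothetical; callers must check the ERR_PTR() return of
 * iommu_group_alloc() and are responsible for the initial reference.
 * Guarded by #if 0 so it is never built.
 */
#if 0
static struct iommu_group *my_driver_device_group(struct device *dev)
{
	/* One group per device; the caller drops the reference when done. */
	return iommu_group_alloc();
}
#endif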
/**
 * iommu_group_get_iommudata - retrieve iommu_data registered for a group
 * @group: the group
 *
 * iommu drivers can store data in the group for use when doing iommu
 * operations. This function provides a way to retrieve it. Caller
 * should hold a group reference.
 */
void *iommu_group_get_iommudata(struct iommu_group *group)
{
return group->iommu_data;
}
EXPORT_SYMBOL_GPL(iommu_group_get_iommudata);
/**
 * iommu_group_set_iommudata - set iommu_data for a group
 * @group: the group
 * @iommu_data: new data
 * @release: release function for iommu_data
 *
 * iommu drivers can store data in the group for use when doing iommu
 * operations. This function provides a way to set the data after
 * the group has been allocated. Caller should hold a group reference.
 */
void iommu_group_set_iommudata(struct iommu_group *group, void *iommu_data,
void (*release)(void *iommu_data))
2011-10-21 15:56:05 -04:00
{
2012-05-30 14:18:53 -06:00
group->iommu_data = iommu_data;
group->iommu_data_release = release;
}
EXPORT_SYMBOL_GPL(iommu_group_set_iommudata);
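/*
 * Illustrative sketch only, not part of the original source: how an
 * IOMMU driver might attach private per-group data with the two
 * accessors above. The my_group_data type and the my_group_* helpers
 * are hypothetical. Guarded by #if 0 so it is never built.
 */
#if 0
struct my_group_data {
	u32 context_id;
};

static void my_group_data_release(void *iommu_data)
{
	kfree(iommu_data);
}

static int my_group_init(struct iommu_group *group)
{
	struct my_group_data *data = kzalloc(sizeof(*data), GFP_KERNEL);

	if (!data)
		return -ENOMEM;
	/* The release callback runs when the group is finally freed. */
	iommu_group_set_iommudata(group, data, my_group_data_release);
	return 0;
}

static u32 my_group_context_id(struct iommu_group *group)
{
	struct my_group_data *data = iommu_group_get_iommudata(group);

	return data ? data->context_id : 0;
}
#endif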
2011-10-21 15:56:05 -04:00
2012-05-30 14:18:53 -06:00
/**
 * iommu_group_set_name - set name for a group
 * @group: the group
 * @name: name
 *
 * Allow iommu driver to set a name for a group. When set it will
 * appear in a name attribute file under the group in sysfs.
 */
int iommu_group_set_name(struct iommu_group *group, const char *name)
{
int ret;
if (group->name) {
iommu_group_remove_file(group, &iommu_group_attr_name);
kfree(group->name);
group->name = NULL;
if (!name)
return 0;
}
group->name = kstrdup(name, GFP_KERNEL);
if (!group->name)
return -ENOMEM;
ret = iommu_group_create_file(group, &iommu_group_attr_name);
if (ret) {
kfree(group->name);
group->name = NULL;
return ret;
}
2011-10-21 15:56:05 -04:00
return 0;
}
2012-05-30 14:18:53 -06:00
EXPORT_SYMBOL_GPL(iommu_group_set_name);
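/*
 * Illustrative sketch only, not part of the original source: giving a
 * group a human-readable label that appears as a "name" attribute in
 * the group's sysfs directory. The my_driver_label_group() helper and
 * the "my-driver" string are purely hypothetical. Guarded by #if 0 so
 * it is never built.
 */
#if 0
static int my_driver_label_group(struct iommu_group *group)
{
	/* Passing NULL later would remove a previously set name. */
	return iommu_group_set_name(group, "my-driver");
}
#endif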
2011-10-21 15:56:05 -04:00
2023-05-11 01:42:12 -03:00
static int iommu_create_device_direct_mappings(struct iommu_domain *domain,
2020-04-29 15:36:50 +02:00
struct device *dev)
2015-05-28 18:41:34 +02:00
{
2017-01-19 20:57:47 +00:00
struct iommu_resv_region *entry;
2015-05-28 18:41:34 +02:00
struct list_head mappings;
unsigned long pg_size;
int ret = 0;
2023-08-09 20:48:02 +08:00
pg_size = domain->pgsize_bitmap ? 1UL << __ffs(domain->pgsize_bitmap) : 0;
2015-05-28 18:41:34 +02:00
INIT_LIST_HEAD(&mappings);
2023-08-09 20:48:02 +08:00
if (WARN_ON_ONCE(iommu_is_dma_domain(domain) && !pg_size))
return -EINVAL;
2017-01-19 20:57:47 +00:00
iommu_get_resv_regions(dev, &mappings);
2015-05-28 18:41:34 +02:00
/* We need to consider overlapping regions for different devices */
list_for_each_entry(entry, &mappings, list) {
dma_addr_t start, end, addr;
2020-12-07 17:35:53 +08:00
size_t map_size = 0;
2015-05-28 18:41:34 +02:00
2023-08-09 20:48:02 +08:00
if (entry->type == IOMMU_RESV_DIRECT)
dev->iommu->require_direct = 1;
2015-05-28 18:41:34 +02:00
2023-08-09 20:48:02 +08:00
if ((entry->type != IOMMU_RESV_DIRECT &&
entry->type != IOMMU_RESV_DIRECT_RELAXABLE) ||
!iommu_is_dma_domain(domain))
2017-01-19 20:57:50 +00:00
continue;
2023-08-09 20:48:02 +08:00
start = ALIGN(entry->start, pg_size);
end = ALIGN(entry->start + entry->length, pg_size);
2020-12-07 17:35:53 +08:00
for (addr = start; addr <= end; addr += pg_size) {
2015-05-28 18:41:34 +02:00
phys_addr_t phys_addr;
2020-12-07 17:35:53 +08:00
if (addr == end)
goto map_end;
2015-05-28 18:41:34 +02:00
phys_addr = iommu_iova_to_phys(domain, addr);
2020-12-07 17:35:53 +08:00
if (!phys_addr) {
map_size += pg_size;
2015-05-28 18:41:34 +02:00
continue;
2020-12-07 17:35:53 +08:00
}
2015-05-28 18:41:34 +02:00
2020-12-07 17:35:53 +08:00
map_end:
if (map_size) {
ret = iommu_map(domain, addr - map_size,
addr - map_size, map_size,
2023-01-23 16:35:54 -04:00
entry->prot, GFP_KERNEL);
2020-12-07 17:35:53 +08:00
if (ret)
goto out;
map_size = 0;
}
2015-05-28 18:41:34 +02:00
}
}
out:
2017-01-19 20:57:47 +00:00
iommu_put_resv_regions(dev, &mappings);
2015-05-28 18:41:34 +02:00
return ret;
}
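/*
 * Illustrative sketch only, not part of the original source: walking a
 * device's reserved regions with the same helpers used above. The
 * my_count_direct_regions() name is hypothetical; the list iteration
 * and the entry fields mirror the loop in
 * iommu_create_device_direct_mappings(). Guarded by #if 0 so it is
 * never built.
 */
#if 0
static unsigned int my_count_direct_regions(struct device *dev)
{
	struct iommu_resv_region *entry;
	struct list_head mappings;
	unsigned int count = 0;

	INIT_LIST_HEAD(&mappings);
	iommu_get_resv_regions(dev, &mappings);
	list_for_each_entry(entry, &mappings, list)
		if (entry->type == IOMMU_RESV_DIRECT ||
		    entry->type == IOMMU_RESV_DIRECT_RELAXABLE)
			count++;
	iommu_put_resv_regions(dev, &mappings);

	return count;
}
#endif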
2023-06-05 21:59:47 -03:00
/* This is undone by __iommu_group_free_device() */
static struct group_device *iommu_group_alloc_device(struct iommu_group *group,
struct device *dev)
2011-10-21 15:56:05 -04:00
{
2012-05-30 14:18:53 -06:00
int ret, i = 0;
2017-02-01 12:19:46 +01:00
struct group_device *device;
2012-05-30 14:18:53 -06:00
device = kzalloc(sizeof(*device), GFP_KERNEL);
if (!device)
2023-06-05 21:59:47 -03:00
return ERR_PTR(-ENOMEM);
2012-05-30 14:18:53 -06:00
device->dev = dev;
2011-10-21 15:56:05 -04:00
2012-05-30 14:18:53 -06:00
ret = sysfs_create_link(&dev->kobj, &group->kobj, "iommu_group");
2017-01-16 12:58:07 +00:00
if (ret)
goto err_free_device;
2012-05-30 14:18:53 -06:00
device->name = kasprintf(GFP_KERNEL, "%s", kobject_name(&dev->kobj));
rename:
if (!device->name) {
2017-01-16 12:58:07 +00:00
ret = -ENOMEM;
goto err_remove_link;
2012-05-30 14:18:53 -06:00
}
2011-10-21 15:56:05 -04:00
2012-05-30 14:18:53 -06:00
ret = sysfs_create_link_nowarn(group->devices_kobj,
&dev->kobj, device->name);
if (ret) {
if (ret == -EEXIST && i >= 0) {
/*
 * Account for the slim chance of collision
 * and append an instance to the name.
 */
2017-01-16 12:58:07 +00:00
kfree(device->name);
2012-05-30 14:18:53 -06:00
device->name = kasprintf(GFP_KERNEL, "%s.%d",
kobject_name(&dev->kobj), i++);
goto rename;
}
2017-01-16 12:58:07 +00:00
goto err_free_name;
2012-05-30 14:18:53 -06:00
}
2013-08-15 11:59:24 -06:00
trace_add_device_to_group(group->id, dev);
2015-05-28 18:41:25 +02:00
2019-02-08 16:05:45 -06:00
dev_info ( dev , " Adding to iommu group %d \n " , group - > id ) ;
2015-05-28 18:41:25 +02:00
2023-06-05 21:59:47 -03:00
return device;
2017-01-16 12:58:07 +00:00
err_free_name:
kfree(device->name);
err_remove_link:
sysfs_remove_link(&dev->kobj, "iommu_group");
err_free_device:
kfree(device);
2019-02-08 16:05:45 -06:00
dev_err ( dev , " Failed to add to iommu group %d: %d \n " , group - > id , ret ) ;
2023-06-05 21:59:47 -03:00
return ERR_PTR(ret);
}
/**
 * iommu_group_add_device - add a device to an iommu group
 * @group: the group into which to add the device (reference should be held)
 * @dev: the device
 *
 * This function is called by an iommu driver to add a device into a
 * group. Adding a device increments the group reference count.
 */
int iommu_group_add_device(struct iommu_group *group, struct device *dev)
{
struct group_device *gdev;
gdev = iommu_group_alloc_device(group, dev);
if (IS_ERR(gdev))
return PTR_ERR(gdev);
iommu_group_ref_get(group);
dev->iommu_group = group;
mutex_lock(&group->mutex);
list_add_tail(&gdev->list, &group->devices);
mutex_unlock(&group->mutex);
return 0;
2011-10-21 15:56:05 -04:00
}
2012-05-30 14:18:53 -06:00
EXPORT_SYMBOL_GPL(iommu_group_add_device);
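/*
 * Illustrative sketch (not from this file): roughly how an IOMMU driver's
 * probe path might pair iommu_group_alloc() with iommu_group_add_device().
 * The helper name my_iommu_add_dev() is a hypothetical placeholder.
 */
static int my_iommu_add_dev(struct device *dev)
{
	struct iommu_group *group;
	int ret;

	group = iommu_group_alloc();		/* new group, one reference held */
	if (IS_ERR(group))
		return PTR_ERR(group);

	ret = iommu_group_add_device(group, dev);	/* takes its own group reference */

	/* Drop the allocation reference; the device now pins the group. */
	iommu_group_put(group);
	return ret;
}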
2011-10-21 15:56:05 -04:00
2012-05-30 14:18:53 -06:00
/**
 * iommu_group_remove_device - remove a device from its current group
 * @dev: device to be removed
 *
 * This function is called by an iommu driver to remove the device from
 * its current group. This decrements the iommu group reference count.
 */
void iommu_group_remove_device(struct device *dev)
{
struct iommu_group *group = dev->iommu_group;
2021-07-31 09:47:37 +02:00
if (!group)
return;
2019-02-08 16:05:45 -06:00
dev_info ( dev , " Removing from iommu group %d \n " , group - > id ) ;
2015-05-28 18:41:25 +02:00
2023-06-05 21:59:42 -03:00
__iommu_group_remove_device(dev);
2012-05-30 14:18:53 -06:00
}
EXPORT_SYMBOL_GPL(iommu_group_remove_device);
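/*
 * Illustrative sketch (not from this file): a hypothetical driver release
 * path undoing the add above. iommu_group_remove_device() drops the
 * reference taken by iommu_group_add_device(), so an empty, unreferenced
 * group is freed automatically via its kobject.
 */
static void my_iommu_release_dev(struct device *dev)
{
	/* Detach the device from its group; no explicit group pointer needed. */
	iommu_group_remove_device(dev);
}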
2024-02-05 11:56:07 +00:00
#if IS_ENABLED(CONFIG_LOCKDEP) && IS_ENABLED(CONFIG_IOMMU_API)
/**
 * iommu_group_mutex_assert - Check device group mutex lock
 * @dev: the device that has group param set
 *
 * This function is called by an iommu driver to check whether it holds
 * the group mutex lock for the given device or not.
 *
 * Note that this function must be called after device group param is set.
 */
void iommu_group_mutex_assert(struct device *dev)
{
struct iommu_group *group = dev->iommu_group;
lockdep_assert_held(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_group_mutex_assert);
#endif
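/*
 * Illustrative sketch (not from this file): how a driver-side helper that
 * must run under the group mutex might document that requirement. The
 * helper name my_update_dev_table() is a hypothetical placeholder.
 */
static void my_update_dev_table(struct device *dev)
{
	/* Complain under lockdep if the caller forgot to hold group->mutex. */
	iommu_group_mutex_assert(dev);

	/* ... manipulate per-device IOMMU state protected by the mutex ... */
}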
2023-11-21 18:03:57 +00:00
static struct device *iommu_group_first_dev(struct iommu_group *group)
{
lockdep_assert_held(&group->mutex);
return list_first_entry(&group->devices, struct group_device, list)->dev;
}
2022-01-28 18:44:33 +08:00
/**
 * iommu_group_for_each_dev - iterate over each device in the group
 * @group: the group
 * @data: caller opaque data to be passed to callback function
 * @fn: caller supplied callback function
 *
 * This function is called by group users to iterate over group devices.
 * Callers should hold a reference count to the group during callback.
 * The group->mutex is held across callbacks, which will block calls to
 * iommu_group_add/remove_device.
 */
2015-05-28 18:41:31 +02:00
int iommu_group_for_each_dev(struct iommu_group *group, void *data,
int (*fn)(struct device *, void *))
{
2023-05-11 01:42:14 -03:00
struct group_device *device;
int ret = 0;
2015-05-28 18:41:31 +02:00
mutex_lock(&group->mutex);
2023-05-11 01:42:14 -03:00
for_each_group_device(group, device) {
ret = fn(device->dev, data);
if (ret)
break;
}
2012-05-30 14:18:53 -06:00
mutex_unlock(&group->mutex);
2015-05-28 18:41:31 +02:00
2012-05-30 14:18:53 -06:00
return ret;
}
EXPORT_SYMBOL_GPL(iommu_group_for_each_dev);
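/*
 * Illustrative sketch (not from this file): a group consumer, e.g. a
 * VFIO-like user level driver provider, counting the devices in a group
 * with iommu_group_for_each_dev(). count_dev() and
 * my_count_group_devices() are hypothetical placeholders.
 */
static int count_dev(struct device *dev, void *data)
{
	int *count = data;

	(*count)++;
	return 0;		/* a non-zero return would stop the iteration */
}

static int my_count_group_devices(struct iommu_group *group)
{
	int count = 0;

	/* group->mutex is held across the callbacks by the core. */
	iommu_group_for_each_dev(group, &count, count_dev);
	return count;
}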
/**
 * iommu_group_get - Return the group for a device and increment reference
 * @dev: get the group that this device belongs to
 *
 * This function is called by iommu drivers and users to get the group
 * for the specified device. If found, the group is returned and the group
 * reference is incremented, else NULL.
 */
struct iommu_group *iommu_group_get(struct device *dev)
{
struct iommu_group *group = dev->iommu_group;
if (group)
kobject_get(group->devices_kobj);
return group;
}
EXPORT_SYMBOL_GPL(iommu_group_get);
2016-11-11 17:59:21 +00:00
/**
 * iommu_group_ref_get - Increment reference on a group
 * @group: the group to use, must not be NULL
 *
 * This function is called by iommu drivers to take additional references on an
 * existing group. Returns the given group for convenience.
 */
struct iommu_group *iommu_group_ref_get(struct iommu_group *group)
{
kobject_get(group->devices_kobj);
return group;
}
2019-12-19 12:03:37 +00:00
EXPORT_SYMBOL_GPL(iommu_group_ref_get);
2016-11-11 17:59:21 +00:00
2012-05-30 14:18:53 -06:00
/**
 * iommu_group_put - Decrement group reference
 * @group: the group to use
 *
 * This function is called by iommu drivers and users to release the
 * iommu group. Once the reference count is zero, the group is released.
 */
void iommu_group_put(struct iommu_group *group)
{
if (group)
kobject_put(group->devices_kobj);
}
EXPORT_SYMBOL_GPL(iommu_group_put);
/**
 * iommu_group_id - Return ID for a group
 * @group: the group to ID
 *
 * Return the unique ID for the group matching the sysfs group number.
 */
int iommu_group_id(struct iommu_group *group)
{
return group->id;
}
EXPORT_SYMBOL_GPL(iommu_group_id);
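/*
 * Illustrative sketch (not from this file): the reference pattern a caller
 * uses to read a device's group number, handling the case where the device
 * has no group yet. my_dev_group_id() is a hypothetical placeholder.
 */
static int my_dev_group_id(struct device *dev)
{
	struct iommu_group *group = iommu_group_get(dev);	/* +1 reference */
	int id;

	if (!group)
		return -ENODEV;		/* device not (yet) in a group */

	id = iommu_group_id(group);	/* matches the sysfs group number */
	iommu_group_put(group);		/* drop the reference taken above */
	return id;
}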
2011-10-21 15:56:05 -04:00
2014-09-19 10:03:06 -06:00
static struct iommu_group *get_pci_alias_group(struct pci_dev *pdev,
unsigned long *devfns);
2014-07-03 09:51:18 -06:00
/*
 * To consider a PCI device isolated, we require ACS to support Source
 * Validation, Request Redirection, Completer Redirection, and Upstream
 * Forwarding. This effectively means that devices cannot spoof their
 * requester ID, requests and completions cannot be redirected, and all
 * transactions are forwarded upstream, even as it passes through a
 * bridge where the target device is downstream.
 */
#define REQ_ACS_FLAGS (PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)
2014-09-19 10:03:06 -06:00
/*
 * For multifunction devices which are not isolated from each other, find
 * all the other non-isolated functions and look for existing groups. For
 * each function, we also need to look for aliases to or from other devices
 * that may already have a group.
 */
static struct iommu_group *get_pci_function_alias_group(struct pci_dev *pdev,
unsigned long *devfns)
{
struct pci_dev *tmp = NULL;
struct iommu_group *group;
if (!pdev->multifunction || pci_acs_enabled(pdev, REQ_ACS_FLAGS))
return NULL;
for_each_pci_dev(tmp) {
if (tmp == pdev || tmp->bus != pdev->bus ||
PCI_SLOT(tmp->devfn) != PCI_SLOT(pdev->devfn) ||
pci_acs_enabled(tmp, REQ_ACS_FLAGS))
continue;
group = get_pci_alias_group(tmp, devfns);
if (group) {
pci_dev_put(tmp);
return group;
}
}
return NULL;
}
/*
2016-03-03 15:38:02 +01:00
 * Look for aliases to or from the given device for existing groups. DMA
 * aliases are only supported on the same bus, therefore the search
2014-09-19 10:03:06 -06:00
 * space is quite small (especially since we're really only looking at PCIe
 * devices, and therefore only expect multiple slots on the root complex or
 * downstream switch ports). It's conceivable though that a pair of
 * multifunction devices could have aliases between them that would cause a
 * loop. To prevent this, we use a bitmap to track where we've been.
 */
static struct iommu_group *get_pci_alias_group(struct pci_dev *pdev,
unsigned long *devfns)
{
struct pci_dev *tmp = NULL;
struct iommu_group *group;
if (test_and_set_bit(pdev->devfn & 0xff, devfns))
return NULL;
group = iommu_group_get(&pdev->dev);
if (group)
return group;
for_each_pci_dev(tmp) {
if (tmp == pdev || tmp->bus != pdev->bus)
continue;
/* We alias them or they alias us */
2016-03-03 15:38:02 +01:00
if (pci_devs_are_dma_aliases(pdev, tmp)) {
2014-09-19 10:03:06 -06:00
group = get_pci_alias_group(tmp, devfns);
if (group) {
pci_dev_put(tmp);
return group;
}
group = get_pci_function_alias_group(tmp, devfns);
if (group) {
pci_dev_put(tmp);
return group;
}
}
}
return NULL;
}
2014-07-03 09:51:18 -06:00
struct group_for_pci_data {
struct pci_dev *pdev;
struct iommu_group *group;
};
/*
 * DMA alias iterator callback, return the last seen device. Stop and return
 * the IOMMU group if we find one along the way.
 */
static int get_pci_alias_or_group(struct pci_dev *pdev, u16 alias, void *opaque)
{
struct group_for_pci_data *data = opaque;
data->pdev = pdev;
data->group = iommu_group_get(&pdev->dev);
return data->group != NULL;
}
2015-10-21 23:51:38 +02:00
/*
 * Generic device_group call-back function. It just allocates one
 * iommu-group per device.
 */
struct iommu_group *generic_device_group(struct device *dev)
{
2017-06-28 12:45:31 +02:00
return iommu_group_alloc();
2015-10-21 23:51:38 +02:00
}
2019-12-19 12:03:37 +00:00
EXPORT_SYMBOL_GPL(generic_device_group);
2015-10-21 23:51:38 +02:00
2023-08-22 13:15:57 -03:00
/*
 * Generic device_group call-back function. It just allocates one
 * iommu-group per iommu driver instance shared by every device
 * probed by that iommu driver.
 */
struct iommu_group *generic_single_device_group(struct device *dev)
{
struct iommu_device *iommu = dev->iommu->iommu_dev;
if (!iommu->singleton_group) {
struct iommu_group *group;
group = iommu_group_alloc();
if (IS_ERR(group))
return group;
iommu->singleton_group = group;
}
return iommu_group_ref_get(iommu->singleton_group);
}
EXPORT_SYMBOL_GPL(generic_single_device_group);
2014-07-03 09:51:18 -06:00
/*
* Use standard PCI bus topology, isolation features, and DMA alias quirks
* to find or create an IOMMU group for a device.
*/
2015-10-21 23:51:37 +02:00
struct iommu_group * pci_device_group ( struct device * dev )
2014-07-03 09:51:18 -06:00
{
2015-10-21 23:51:37 +02:00
struct pci_dev * pdev = to_pci_dev ( dev ) ;
2014-07-03 09:51:18 -06:00
struct group_for_pci_data data ;
struct pci_bus * bus ;
struct iommu_group * group = NULL ;
2014-09-19 10:03:06 -06:00
u64 devfns [ 4 ] = { 0 } ;
2014-07-03 09:51:18 -06:00
2015-10-21 23:51:37 +02:00
if ( WARN_ON ( ! dev_is_pci ( dev ) ) )
return ERR_PTR ( - EINVAL ) ;
2014-07-03 09:51:18 -06:00
/*
* Find the upstream DMA alias for the device.  A device must not
* be aliased due to topology in order to have its own IOMMU group.
* If we find an alias along the way that already belongs to a
* group, use it.
*/
if ( pci_for_each_dma_alias ( pdev , get_pci_alias_or_group , & data ) )
return data . group ;
pdev = data . pdev ;
/*
* Continue upstream from the point of minimum IOMMU granularity
* due to aliases to the point where devices are protected from
* peer-to-peer DMA by PCI ACS.  Again, if we find an existing
* group, use it.
*/
for ( bus = pdev - > bus ; ! pci_is_root_bus ( bus ) ; bus = bus - > parent ) {
if ( ! bus - > self )
continue ;
if ( pci_acs_path_enabled ( bus - > self , NULL , REQ_ACS_FLAGS ) )
break ;
pdev = bus - > self ;
group = iommu_group_get ( & pdev - > dev ) ;
if ( group )
return group ;
}
/*
2014-09-19 10:03:06 -06:00
* Look for existing groups on device aliases.  If we alias another
* device or another device aliases us, use the same group.
2014-07-03 09:51:18 -06:00
*/
2014-09-19 10:03:06 -06:00
group = get_pci_alias_group ( pdev , ( unsigned long * ) devfns ) ;
if ( group )
return group ;
2014-07-03 09:51:18 -06:00
/*
2014-09-19 10:03:06 -06:00
* Look for existing groups on non-isolated functions on the same
* slot and aliases of those functions, if any.  No need to clear
* the search bitmap, the tested devfns are still valid.
2014-07-03 09:51:18 -06:00
*/
2014-09-19 10:03:06 -06:00
group = get_pci_function_alias_group ( pdev , ( unsigned long * ) devfns ) ;
if ( group )
return group ;
2014-07-03 09:51:18 -06:00
/* No shared group found, allocate new */
2017-06-28 12:45:31 +02:00
return iommu_group_alloc ( ) ;
2014-07-03 09:51:18 -06:00
}
2019-12-19 12:03:37 +00:00
EXPORT_SYMBOL_GPL ( pci_device_group ) ;
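/*
 * Illustrative sketch (not part of iommu.c): an IOMMU driver typically plugs
 * one of the *_device_group() helpers above into the device_group callback of
 * its struct iommu_ops so the core can ask for a group when a device is
 * probed.  pci_device_group() and generic_device_group() are the real exported
 * helpers; every "my_" name below is hypothetical.
 *
 *	static struct iommu_group *my_device_group(struct device *dev)
 *	{
 *		// PCI devices get topology/ACS-aware grouping, everything
 *		// else gets a group of its own.
 *		if (dev_is_pci(dev))
 *			return pci_device_group(dev);
 *		return generic_device_group(dev);
 *	}
 *
 *	static const struct iommu_ops my_iommu_ops = {
 *		// other callbacks omitted
 *		.device_group	= my_device_group,
 *	};
 */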
2014-07-03 09:51:18 -06:00
2018-09-10 19:19:18 +05:30
/* Get the IOMMU group for device on fsl-mc bus */
struct iommu_group * fsl_mc_device_group ( struct device * dev )
{
struct device * cont_dev = fsl_mc_cont_dev ( dev ) ;
struct iommu_group * group ;
group = iommu_group_get ( cont_dev ) ;
if ( ! group )
group = iommu_group_alloc ( ) ;
return group ;
}
2019-12-19 12:03:37 +00:00
EXPORT_SYMBOL_GPL ( fsl_mc_device_group ) ;
2018-09-10 19:19:18 +05:30
2024-10-28 09:38:10 +00:00
static struct iommu_domain * __iommu_alloc_identity_domain ( struct device * dev )
{
const struct iommu_ops * ops = dev_iommu_ops ( dev ) ;
struct iommu_domain * domain ;
if ( ops - > identity_domain )
return ops - > identity_domain ;
2025-04-08 13:35:48 -03:00
if ( ops - > domain_alloc_identity ) {
domain = ops - > domain_alloc_identity ( dev ) ;
if ( IS_ERR ( domain ) )
return domain ;
} else {
2024-10-28 09:38:10 +00:00
return ERR_PTR ( - EOPNOTSUPP ) ;
2025-04-08 13:35:48 -03:00
}
2024-10-28 09:38:10 +00:00
iommu_domain_init ( domain , IOMMU_DOMAIN_IDENTITY , ops ) ;
return domain ;
}
2023-05-11 01:42:12 -03:00
static struct iommu_domain *
2023-09-13 10:43:54 -03:00
__iommu_group_alloc_default_domain ( struct iommu_group * group , int req_type )
2020-04-29 15:36:39 +02:00
{
2024-10-28 09:38:01 +00:00
struct device * dev = iommu_group_first_dev ( group ) ;
struct iommu_domain * dom ;
2023-05-11 01:42:12 -03:00
if ( group - > default_domain & & group - > default_domain - > type = = req_type )
return group - > default_domain ;
2024-10-28 09:38:01 +00:00
/*
* When allocating the DMA API domain assume that the driver is going to
* use PASID and make sure the RID's domain is PASID compatible.
*/
if ( req_type & __IOMMU_DOMAIN_PAGING ) {
dom = __iommu_paging_domain_alloc_flags ( dev , req_type ,
dev - > iommu - > max_pasids ? IOMMU_HWPT_ALLOC_PASID : 0 ) ;
/*
* If the driver does not support the PASID feature then
* try to allocate a non-PASID domain
*/
if ( PTR_ERR ( dom ) = = - EOPNOTSUPP )
dom = __iommu_paging_domain_alloc_flags ( dev , req_type , 0 ) ;
return dom ;
}
2024-10-28 09:38:10 +00:00
if ( req_type = = IOMMU_DOMAIN_IDENTITY )
return __iommu_alloc_identity_domain ( dev ) ;
return ERR_PTR ( - EINVAL ) ;
2020-04-29 15:36:39 +02:00
}
2023-05-11 01:42:11 -03:00
/*
* req_type of 0 means "auto" which means to select a domain based on
* iommu_def_domain_type or what the driver actually supports.
*/
static struct iommu_domain *
iommu_group_alloc_default_domain ( struct iommu_group * group , int req_type )
2020-04-29 15:36:46 +02:00
{
2023-11-21 18:03:57 +00:00
const struct iommu_ops * ops = dev_iommu_ops ( iommu_group_first_dev ( group ) ) ;
2020-04-29 15:36:39 +02:00
struct iommu_domain * dom ;
2020-04-29 15:36:46 +02:00
2023-05-11 01:42:11 -03:00
lockdep_assert_held ( & group - > mutex ) ;
2020-04-29 15:36:39 +02:00
2023-09-13 10:43:35 -03:00
/*
* Allow legacy drivers to specify the domain that will be the default
* domain . This should always be either an IDENTITY / BLOCKED / PLATFORM
* domain . Do not use in new drivers .
*/
2023-09-13 10:43:54 -03:00
if ( ops - > default_domain ) {
2024-01-30 12:12:53 -04:00
if ( req_type ! = ops - > default_domain - > type )
2023-11-01 20:28:11 -03:00
return ERR_PTR ( - EINVAL ) ;
2023-09-13 10:43:54 -03:00
return ops - > default_domain ;
2023-09-13 10:43:35 -03:00
}
2023-05-11 01:42:11 -03:00
if ( req_type )
2023-09-13 10:43:54 -03:00
return __iommu_group_alloc_default_domain ( group , req_type ) ;
2020-04-29 15:36:46 +02:00
2023-05-11 01:42:11 -03:00
/* The driver gave no guidance on what type to use, try the default */
2023-09-13 10:43:54 -03:00
dom = __iommu_group_alloc_default_domain ( group , iommu_def_domain_type ) ;
2023-11-01 20:28:11 -03:00
if ( ! IS_ERR ( dom ) )
2023-05-11 01:42:11 -03:00
return dom ;
2020-04-29 15:36:46 +02:00
2023-05-11 01:42:11 -03:00
/* Otherwise IDENTITY and DMA_FQ defaults will try DMA */
if ( iommu_def_domain_type = = IOMMU_DOMAIN_DMA )
2023-11-01 20:28:11 -03:00
return ERR_PTR ( - EINVAL ) ;
2023-09-13 10:43:54 -03:00
dom = __iommu_group_alloc_default_domain ( group , IOMMU_DOMAIN_DMA ) ;
2023-11-01 20:28:11 -03:00
if ( IS_ERR ( dom ) )
return dom ;
2020-04-29 15:36:46 +02:00
2023-05-11 01:42:11 -03:00
pr_warn ( " Failed to allocate default IOMMU domain of type %u for group %s - Falling back to IOMMU_DOMAIN_DMA " ,
iommu_def_domain_type , group - > name ) ;
return dom ;
2020-04-29 15:36:46 +02:00
}
2015-05-28 18:41:35 +02:00
struct iommu_domain * iommu_group_default_domain ( struct iommu_group * group )
{
return group - > default_domain ;
}
2020-04-29 15:36:49 +02:00
static int probe_iommu_group ( struct device * dev , void * data )
2011-10-21 15:56:05 -04:00
{
2020-04-29 15:36:49 +02:00
struct list_head * group_list = data ;
int ret ;
2015-06-29 10:16:08 +02:00
2023-11-15 18:25:44 +00:00
mutex_lock ( & iommu_probe_device_lock ) ;
2020-04-29 15:36:49 +02:00
ret = __iommu_probe_device ( dev , group_list ) ;
2023-11-15 18:25:44 +00:00
mutex_unlock ( & iommu_probe_device_lock ) ;
2015-06-29 10:16:08 +02:00
if ( ret = = - ENODEV )
ret = 0 ;
return ret ;
2011-10-21 15:56:05 -04:00
}
2012-05-30 14:18:53 -06:00
static int iommu_bus_notifier ( struct notifier_block * nb ,
unsigned long action , void * data )
2011-10-21 15:56:05 -04:00
{
struct device * dev = data ;
2012-05-30 14:18:53 -06:00
if ( action = = BUS_NOTIFY_ADD_DEVICE ) {
2018-11-30 10:31:59 +01:00
int ret ;
2017-04-18 20:51:48 +08:00
2018-11-30 10:31:59 +01:00
ret = iommu_probe_device ( dev ) ;
return ( ret ) ? NOTIFY_DONE : NOTIFY_OK ;
2015-05-28 18:41:28 +02:00
} else if ( action = = BUS_NOTIFY_REMOVED_DEVICE ) {
2018-11-30 10:31:59 +01:00
iommu_release_device ( dev ) ;
return NOTIFY_OK ;
2012-05-30 14:18:53 -06:00
}
2011-10-21 15:56:05 -04:00
return 0 ;
}
2023-09-13 10:43:41 -03:00
/*
* Combine the driver ' s chosen def_domain_type across all the devices in a
* group . Drivers must give a consistent result .
*/
static int iommu_get_def_domain_type ( struct iommu_group * group ,
struct device * dev , int cur_type )
{
2023-11-21 18:03:57 +00:00
const struct iommu_ops * ops = dev_iommu_ops ( dev ) ;
2023-09-13 10:43:41 -03:00
int type ;
2024-01-30 12:12:53 -04:00
if ( ops - > default_domain ) {
/*
* Drivers that declare a global static default_domain will
* always choose that .
*/
type = ops - > default_domain - > type ;
} else {
if ( ops - > def_domain_type )
type = ops - > def_domain_type ( dev ) ;
else
return cur_type ;
}
2023-09-13 10:43:41 -03:00
if ( ! type | | cur_type = = type )
return cur_type ;
if ( ! cur_type )
return type ;
dev_err_ratelimited (
dev ,
" IOMMU driver error, requesting conflicting def_domain_type, %s and %s, for devices in group %u. \n " ,
iommu_domain_type_str ( cur_type ) , iommu_domain_type_str ( type ) ,
group - > id ) ;
/*
2025-01-28 19:05:21 +00:00
* Try to recover; drivers are allowed to force IDENTITY or DMA, and IDENTITY
2023-09-13 10:43:41 -03:00
* takes precedence .
*/
if ( type = = IOMMU_DOMAIN_IDENTITY )
return type ;
return cur_type ;
}
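/*
 * Illustrative sketch (not part of iommu.c): a driver that needs certain
 * devices to stay in passthrough can express that through the optional
 * def_domain_type callback consumed above.  The property name and function
 * are hypothetical placeholders; the return values are the real
 * IOMMU_DOMAIN_* constants, and 0 means "no preference".
 *
 *	static int my_def_domain_type(struct device *dev)
 *	{
 *		// Hypothetical quirk: a device that cannot tolerate
 *		// translation must get an identity (passthrough) domain.
 *		if (device_property_read_bool(dev, "my,force-passthrough"))
 *			return IOMMU_DOMAIN_IDENTITY;
 *		return 0;	// let the core pick the default
 *	}
 */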
/*
* A target_type of 0 will select the best domain type.  0 can be returned in
* this case, meaning the global default should be used.
*/
2023-05-11 01:42:10 -03:00
static int iommu_get_default_domain_type ( struct iommu_group * group ,
int target_type )
2020-04-29 15:36:49 +02:00
{
2023-09-13 10:43:41 -03:00
struct device * untrusted = NULL ;
2023-05-11 01:42:10 -03:00
struct group_device * gdev ;
2023-09-13 10:43:41 -03:00
int driver_type = 0 ;
2020-04-29 15:36:49 +02:00
2023-05-11 01:42:10 -03:00
lockdep_assert_held ( & group - > mutex ) ;
2020-04-29 15:36:49 +02:00
2023-09-13 10:43:42 -03:00
/*
* ARM32 drivers supporting CONFIG_ARM_DMA_USE_IOMMU can declare an
* identity_domain and it will automatically become their default
* domain . Later on ARM_DMA_USE_IOMMU will install its UNMANAGED domain .
2023-09-13 10:43:53 -03:00
* Override the selection to IDENTITY .
2023-09-13 10:43:42 -03:00
*/
2023-09-13 10:43:53 -03:00
if ( IS_ENABLED ( CONFIG_ARM_DMA_USE_IOMMU ) ) {
static_assert ( ! ( IS_ENABLED ( CONFIG_ARM_DMA_USE_IOMMU ) & &
IS_ENABLED ( CONFIG_IOMMU_DMA ) ) ) ;
2023-09-13 10:43:42 -03:00
driver_type = IOMMU_DOMAIN_IDENTITY ;
2023-09-13 10:43:53 -03:00
}
2023-09-13 10:43:42 -03:00
2023-05-11 01:42:10 -03:00
for_each_group_device ( group , gdev ) {
2023-09-13 10:43:41 -03:00
driver_type = iommu_get_def_domain_type ( group , gdev - > dev ,
driver_type ) ;
2020-04-29 15:36:49 +02:00
2023-09-13 10:43:42 -03:00
if ( dev_is_pci ( gdev - > dev ) & & to_pci_dev ( gdev - > dev ) - > untrusted ) {
/*
* No ARM32-using systems will set untrusted; it cannot
* work.
*/
if ( WARN_ON ( IS_ENABLED ( CONFIG_ARM_DMA_USE_IOMMU ) ) )
2023-05-11 01:42:10 -03:00
return - 1 ;
2023-09-13 10:43:41 -03:00
untrusted = gdev - > dev ;
2023-09-13 10:43:42 -03:00
}
2023-09-13 10:43:41 -03:00
}
2020-04-29 15:36:49 +02:00
2023-10-03 13:52:36 -03:00
/*
* If the common dma ops are not selected in kconfig then we cannot use
* IOMMU_DOMAIN_DMA at all . Force IDENTITY if nothing else has been
* selected .
*/
if ( ! IS_ENABLED ( CONFIG_IOMMU_DMA ) ) {
if ( WARN_ON ( driver_type = = IOMMU_DOMAIN_DMA ) )
return - 1 ;
if ( ! driver_type )
driver_type = IOMMU_DOMAIN_IDENTITY ;
}
2023-09-13 10:43:41 -03:00
if ( untrusted ) {
if ( driver_type & & driver_type ! = IOMMU_DOMAIN_DMA ) {
dev_err_ratelimited (
untrusted ,
" Device is not trusted, but driver is overriding group %u to %s, refusing to probe. \n " ,
group - > id , iommu_domain_type_str ( driver_type ) ) ;
return - 1 ;
2020-04-29 15:36:49 +02:00
}
2023-09-13 10:43:41 -03:00
driver_type = IOMMU_DOMAIN_DMA ;
2020-04-29 15:36:49 +02:00
}
2023-09-13 10:43:41 -03:00
if ( target_type ) {
if ( driver_type & & target_type ! = driver_type )
return - 1 ;
return target_type ;
2020-04-29 15:36:49 +02:00
}
2023-09-13 10:43:41 -03:00
return driver_type ;
2020-04-29 15:36:49 +02:00
}
2023-05-11 01:42:14 -03:00
static void iommu_group_do_probe_finalize ( struct device * dev )
2020-05-19 15:28:24 +02:00
{
2022-02-16 10:52:47 +08:00
const struct iommu_ops * ops = dev_iommu_ops ( dev ) ;
2020-05-19 15:28:24 +02:00
2022-02-16 10:52:47 +08:00
if ( ops - > probe_finalize )
ops - > probe_finalize ( dev ) ;
2020-04-29 15:36:50 +02:00
}
2024-10-28 17:58:38 +00:00
static int bus_iommu_probe ( const struct bus_type * bus )
2020-04-29 15:36:49 +02:00
{
2020-04-29 15:37:10 +02:00
struct iommu_group * group , * next ;
LIST_HEAD ( group_list ) ;
2020-04-29 15:36:49 +02:00
int ret ;
2020-04-29 15:37:10 +02:00
ret = bus_for_each_dev ( bus , NULL , & group_list , probe_iommu_group ) ;
if ( ret )
return ret ;
2020-04-29 15:36:49 +02:00
2020-04-29 15:37:10 +02:00
list_for_each_entry_safe ( group , next , & group_list , entry ) {
2023-05-11 01:42:14 -03:00
struct group_device * gdev ;
2022-11-04 19:51:43 +00:00
mutex_lock ( & group - > mutex ) ;
2020-04-29 15:37:10 +02:00
/* Remove item from the list */
list_del_init ( & group - > entry ) ;
2020-04-29 15:36:49 +02:00
2023-06-05 21:59:48 -03:00
/*
* We go to the trouble of deferred default domain creation so
* that the cross-group default domain type and the setup of the
* IOMMU_RESV_DIRECT will work correctly in non-hotplug scenarios.
*/
2023-05-11 01:42:12 -03:00
ret = iommu_setup_default_domain ( group , 0 ) ;
if ( ret ) {
2020-04-29 15:37:10 +02:00
mutex_unlock ( & group - > mutex ) ;
2023-05-11 01:42:12 -03:00
return ret ;
2020-04-29 15:37:10 +02:00
}
2024-04-19 17:54:45 +01:00
for_each_group_device ( group , gdev )
iommu_setup_dma_ops ( gdev - > dev ) ;
2020-04-29 15:37:10 +02:00
mutex_unlock ( & group - > mutex ) ;
2020-04-29 15:36:49 +02:00
2023-05-11 01:42:14 -03:00
/*
* FIXME: Mis-locked because the ops->probe_finalize() call-back
* of some IOMMU drivers calls arm_iommu_attach_device() which
* in turn might call back into IOMMU core code, where it tries
* to take group->mutex, resulting in a deadlock.
*/
for_each_group_device ( group , gdev )
iommu_group_do_probe_finalize ( gdev - > dev ) ;
2020-04-29 15:36:49 +02:00
}
2023-05-11 01:42:12 -03:00
return 0 ;
2020-04-29 15:36:49 +02:00
}
2022-04-25 13:42:02 +01:00
/**
* device_iommu_capable() - check for a general IOMMU capability
* @dev: device to which the capability would be relevant, if available
* @cap: IOMMU capability
*
* Return: true if an IOMMU is present and supports the given capability
* for the given device, otherwise false.
*/
bool device_iommu_capable ( struct device * dev , enum iommu_cap cap )
{
const struct iommu_ops * ops ;
2023-11-21 18:03:57 +00:00
if ( ! dev_has_iommu ( dev ) )
2022-04-25 13:42:02 +01:00
return false ;
ops = dev_iommu_ops ( dev ) ;
if ( ! ops - > capable )
return false ;
2022-08-15 16:26:49 +01:00
return ops - > capable ( dev , cap ) ;
2022-04-25 13:42:02 +01:00
}
EXPORT_SYMBOL_GPL ( device_iommu_capable ) ;
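/*
 * Illustrative sketch (not part of iommu.c): a typical caller, such as a
 * driver deciding whether it may rely on cache-coherent DMA through the
 * IOMMU, only needs the device pointer.  IOMMU_CAP_CACHE_COHERENCY is a real
 * capability; the wrapper function name is hypothetical.
 *
 *	static bool my_dev_supports_coherent_dma(struct device *dev)
 *	{
 *		return device_iommu_capable(dev, IOMMU_CAP_CACHE_COHERENCY);
 *	}
 */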
2022-12-09 13:23:08 -04:00
/**
* iommu_group_has_isolated_msi() - Compute msi_device_has_isolated_msi()
* for a group
* @group: Group to query
*
* IOMMU groups should not have differing values of
* msi_device_has_isolated_msi() for devices in a group. However nothing
* directly prevents this, so ensure mistakes don't result in isolation failures
* by checking that all the devices are the same.
*/
bool iommu_group_has_isolated_msi ( struct iommu_group * group )
{
struct group_device * group_dev ;
bool ret = true ;
mutex_lock ( & group - > mutex ) ;
2023-05-11 01:42:00 -03:00
for_each_group_device ( group , group_dev )
2022-11-28 20:34:32 -04:00
ret & = msi_device_has_isolated_msi ( group_dev - > dev ) ;
2022-12-09 13:23:08 -04:00
mutex_unlock ( & group - > mutex ) ;
return ret ;
}
EXPORT_SYMBOL_GPL ( iommu_group_has_isolated_msi ) ;
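/*
 * Illustrative sketch (not part of iommu.c): userspace driver frameworks
 * such as VFIO use this check to decide whether handing the whole group to
 * an untrusted user is acceptable with respect to MSI isolation.  The
 * opt-in knob and helper below are hypothetical.
 *
 *	static bool allow_unsafe_interrupts;	// hypothetical opt-in knob
 *
 *	static int my_check_msi_isolation(struct iommu_group *group)
 *	{
 *		if (iommu_group_has_isolated_msi(group))
 *			return 0;
 *		if (allow_unsafe_interrupts)
 *			return 0;	// user explicitly accepted the risk
 *		return -EPERM;
 *	}
 */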
2011-09-13 15:25:23 -04:00
/**
* iommu_set_fault_handler() - set a fault handler for an iommu domain
* @domain: iommu domain
* @handler: fault handler
2012-05-21 20:20:05 +03:00
* @token: user data, will be passed back to the fault handler
2011-09-27 07:36:40 -04:00
*
* This function should be used by IOMMU users which want to be notified
* whenever an IOMMU fault happens.
*
* The fault handler itself should return 0 on success, and an appropriate
* error code otherwise.
2011-09-13 15:25:23 -04:00
*/
void iommu_set_fault_handler ( struct iommu_domain * domain ,
2012-05-21 20:20:05 +03:00
iommu_fault_handler_t handler ,
void * token )
2011-09-13 15:25:23 -04:00
{
iommu: Sort out domain user data
When DMA/MSI cookies were made first-class citizens back in commit
46983fcd67ac ("iommu: Pull IOVA cookie management into the core"), there
was no real need to further expose the two different cookie types.
However, now that IOMMUFD wants to add a third type of MSI-mapping
cookie, we do have a nicely compelling reason to properly disambiguate
things at the domain level beyond just vaguely guessing from the domain
type.
Meanwhile, we also effectively have another "cookie" in the form of the
anonymous union for other user data, which isn't much better in terms of
being vague and unenforced. The fact is that all these cookie types are
mutually exclusive, in the sense that combining them makes zero sense
and/or would be catastrophic (iommu_set_fault_handler() on an SVA
domain, anyone?) - the only combination which *might* be reasonable is
perhaps a fault handler and an MSI cookie, but nobody's doing that at
the moment, so let's rule it out as well for the sake of being clear and
robust. To that end, we pull DMA and MSI cookies apart a little more,
mostly to clear up the ambiguity at domain teardown, then for clarity
(and to save a little space), move them into the union, whose ownership
we can then properly describe and enforce entirely unambiguously.
[nicolinc: rebase on latest tree; use prefix IOMMU_COOKIE_; merge unions
in iommu_domain; add IOMMU_COOKIE_IOMMUFD for iommufd_hwpt]
Link: https://patch.msgid.link/r/1ace9076c95204bbe193ee77499d395f15f44b23.1742871535.git.nicolinc@nvidia.com
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-24 21:05:15 -07:00
if ( WARN_ON ( ! domain | | domain - > cookie_type ! = IOMMU_COOKIE_NONE ) )
return ;
2011-09-13 15:25:23 -04:00
2025-03-24 21:05:15 -07:00
domain - > cookie_type = IOMMU_COOKIE_FAULT_HANDLER ;
2011-09-13 15:25:23 -04:00
domain - > handler = handler ;
2012-05-21 20:20:05 +03:00
domain - > handler_token = token ;
2011-09-13 15:25:23 -04:00
}
2011-09-26 09:11:46 -04:00
EXPORT_SYMBOL_GPL ( iommu_set_fault_handler ) ;
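/*
 * Illustrative sketch (not part of iommu.c): a handler matching the
 * iommu_fault_handler_t signature that just logs the faulting IOVA and
 * returns an error code to indicate the fault was not handled.  The handler
 * name is hypothetical; the registration call is the real API defined above
 * and assumes a caller-owned paging (UNMANAGED) domain.
 *
 *	static int my_fault_handler(struct iommu_domain *domain,
 *				    struct device *dev, unsigned long iova,
 *				    int flags, void *token)
 *	{
 *		dev_err(dev, "unexpected fault at IOVA %#lx (flags %#x)\n",
 *			iova, flags);
 *		return -ENOSYS;	// not handled; an appropriate error code
 *	}
 *
 *	// after allocating the domain:
 *	//	iommu_set_fault_handler(domain, my_fault_handler, NULL);
 */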
2011-09-13 15:25:23 -04:00
2024-10-28 09:37:59 +00:00
static void iommu_domain_init ( struct iommu_domain * domain , unsigned int type ,
const struct iommu_ops * ops )
2008-11-26 17:21:24 +01:00
{
2015-05-28 18:41:29 +02:00
domain - > type = type ;
2023-11-21 18:03:59 +00:00
domain - > owner = ops ;
2024-10-28 09:37:59 +00:00
if ( ! domain - > ops )
domain - > ops = ops - > default_domain_ops ;
2008-11-26 17:21:24 +01:00
}
2023-09-13 10:43:54 -03:00
static struct iommu_domain *
2024-10-28 09:38:01 +00:00
__iommu_paging_domain_alloc_flags ( struct device * dev , unsigned int type ,
unsigned int flags )
2023-11-21 18:04:00 +00:00
{
2024-10-28 09:38:00 +00:00
const struct iommu_ops * ops ;
struct iommu_domain * domain ;
2023-11-21 18:04:00 +00:00
if ( ! dev_has_iommu ( dev ) )
2024-06-10 16:55:35 +08:00
return ERR_PTR ( - ENODEV ) ;
2023-11-21 18:04:00 +00:00
2024-10-28 09:38:00 +00:00
ops = dev_iommu_ops ( dev ) ;
2023-11-21 18:04:00 +00:00
2024-10-28 09:38:00 +00:00
if ( ops - > domain_alloc_paging & & ! flags )
domain = ops - > domain_alloc_paging ( dev ) ;
2024-11-14 15:55:31 -04:00
else if ( ops - > domain_alloc_paging_flags )
domain = ops - > domain_alloc_paging_flags ( dev , flags , NULL ) ;
2025-04-08 13:35:51 -03:00
# if IS_ENABLED(CONFIG_FSL_PAMU)
2024-10-28 09:38:00 +00:00
else if ( ops - > domain_alloc & & ! flags )
domain = ops - > domain_alloc ( IOMMU_DOMAIN_UNMANAGED ) ;
2025-04-08 13:35:51 -03:00
# endif
2024-10-28 09:38:00 +00:00
else
return ERR_PTR ( - EOPNOTSUPP ) ;
2023-11-21 18:04:00 +00:00
2023-11-01 20:28:11 -03:00
if ( IS_ERR ( domain ) )
2024-10-28 09:38:00 +00:00
return domain ;
if ( ! domain )
return ERR_PTR ( - ENOMEM ) ;
2024-10-28 09:38:01 +00:00
iommu_domain_init ( domain , type , ops ) ;
2023-11-01 20:28:11 -03:00
return domain ;
2008-11-26 17:21:24 +01:00
}
2024-06-10 16:55:35 +08:00
/**
2024-10-28 09:38:01 +00:00
* iommu_paging_domain_alloc_flags() - Allocate a paging domain
2024-06-10 16:55:35 +08:00
* @dev: device for which the domain is allocated
2024-10-28 09:38:01 +00:00
* @flags: Bitmap of iommufd_hwpt_alloc_flags
2024-06-10 16:55:35 +08:00
*
* Allocate a paging domain which will be managed by a kernel driver. Returns
2024-10-28 09:38:01 +00:00
* the allocated domain if successful, or an ERR pointer on failure.
2024-06-10 16:55:35 +08:00
*/
2024-10-28 09:38:01 +00:00
struct iommu_domain * iommu_paging_domain_alloc_flags ( struct device * dev ,
unsigned int flags )
2024-06-10 16:55:35 +08:00
{
2024-10-28 09:38:01 +00:00
return __iommu_paging_domain_alloc_flags ( dev ,
IOMMU_DOMAIN_UNMANAGED , flags ) ;
2024-06-10 16:55:35 +08:00
}
2024-10-28 09:38:00 +00:00
EXPORT_SYMBOL_GPL ( iommu_paging_domain_alloc_flags ) ;
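/*
 * Illustrative sketch (not part of iommu.c): a kernel driver that wants its
 * own translation, instead of the DMA API default domain, allocates a paging
 * domain for its device and attaches it.  Error handling is reduced to the
 * minimum and the function name is hypothetical; the calls themselves are the
 * real APIs in this file.
 *
 *	static struct iommu_domain *my_private_domain(struct device *dev)
 *	{
 *		struct iommu_domain *domain;
 *		int ret;
 *
 *		domain = iommu_paging_domain_alloc_flags(dev, 0);
 *		if (IS_ERR(domain))
 *			return domain;
 *
 *		ret = iommu_attach_device(domain, dev);
 *		if (ret) {
 *			iommu_domain_free(domain);
 *			return ERR_PTR(ret);
 *		}
 *		return domain;
 *	}
 */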
2024-06-10 16:55:35 +08:00
2008-11-26 17:21:24 +01:00
void iommu_domain_free ( struct iommu_domain * domain )
{
2025-03-24 21:05:15 -07:00
switch ( domain - > cookie_type ) {
case IOMMU_COOKIE_DMA_IOVA :
iommu_put_dma_cookie ( domain ) ;
break ;
case IOMMU_COOKIE_DMA_MSI :
iommu_put_msi_cookie ( domain ) ;
break ;
case IOMMU_COOKIE_SVA :
2022-10-31 08:59:10 +08:00
mmdrop ( domain - > mm ) ;
2025-03-24 21:05:15 -07:00
break ;
default :
break ;
}
2023-09-13 10:43:34 -03:00
if ( domain - > ops - > free )
domain - > ops - > free ( domain ) ;
2008-11-26 17:21:24 +01:00
}
EXPORT_SYMBOL_GPL ( iommu_domain_free ) ;
2022-05-09 13:19:19 -03:00
/*
* Put the group's domain back to the appropriate core-owned domain - either the
* standard kernel-mode DMA configuration or an all-DMA-blocked domain.
*/
static void __iommu_group_set_core_domain ( struct iommu_group * group )
{
struct iommu_domain * new_domain ;
if ( group - > owner )
new_domain = group - > blocking_domain ;
else
new_domain = group - > default_domain ;
2023-05-11 01:42:01 -03:00
__iommu_group_set_domain_nofail ( group , new_domain ) ;
2022-05-09 13:19:19 -03:00
}
2015-05-28 18:41:30 +02:00
static int __iommu_attach_device ( struct iommu_domain * domain ,
struct device * dev )
2008-11-26 17:21:24 +01:00
{
2013-08-15 11:59:26 -06:00
int ret ;
2017-08-09 16:33:40 +08:00
2011-09-06 16:44:29 +02:00
if ( unlikely ( domain - > ops - > attach_dev = = NULL ) )
return - ENODEV ;
2013-08-15 11:59:26 -06:00
ret = domain - > ops - > attach_dev ( domain , dev ) ;
2023-01-10 10:54:07 +08:00
if ( ret )
return ret ;
dev - > iommu - > attach_deferred = 0 ;
trace_attach_device_to_domain ( dev ) ;
return 0 ;
2008-11-26 17:21:24 +01:00
}
2015-05-28 18:41:30 +02:00
2022-10-17 16:01:22 -07:00
/**
* iommu_attach_device - Attach an IOMMU domain to a device
* @domain: IOMMU domain to attach
* @dev: Device that will be attached
*
* Returns 0 on success and error code on failure
*
* Note that EINVAL can be treated as a soft failure, indicating
* that certain configuration of the domain is incompatible with
* the device. In this case attaching a different domain to the
* device may succeed.
*/
2015-05-28 18:41:30 +02:00
int iommu_attach_device ( struct iommu_domain * domain , struct device * dev )
{
2023-08-22 13:15:56 -03:00
/* Caller must be a probed driver on dev */
struct iommu_group * group = dev - > iommu_group ;
2015-05-28 18:41:30 +02:00
int ret ;
2017-12-20 09:48:36 -07:00
if ( ! group )
return - ENODEV ;
2015-05-28 18:41:30 +02:00
/*
2017-07-21 13:12:38 +01:00
* Lock the group to make sure the device - count doesn ' t
2015-05-28 18:41:30 +02:00
* change while we are attaching
*/
mutex_lock ( & group - > mutex ) ;
ret = - EINVAL ;
2023-05-11 01:41:59 -03:00
if ( list_count_nodes ( & group - > devices ) ! = 1 )
2015-05-28 18:41:30 +02:00
goto out_unlock ;
2015-05-28 18:41:31 +02:00
ret = __iommu_attach_group ( domain , group ) ;
2015-05-28 18:41:30 +02:00
out_unlock :
mutex_unlock ( & group - > mutex ) ;
return ret ;
}
2008-11-26 17:21:24 +01:00
EXPORT_SYMBOL_GPL ( iommu_attach_device ) ;
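/*
 * Illustrative sketch (not part of iommu.c): once a caller-owned domain is
 * attached, the usual next step is to populate it with iommu_map() and tear
 * the mapping down again with iommu_unmap().  The snippet assumes domain, dev
 * and page are provided by the caller; the IOVA and size are arbitrary
 * example values and must respect the domain's pgsize_bitmap, and the GFP
 * argument to iommu_map() reflects current kernels.
 *
 *	ret = iommu_attach_device(domain, dev);
 *	if (ret)
 *		goto err;
 *
 *	ret = iommu_map(domain, 0x100000, page_to_phys(page), SZ_4K,
 *			IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
 *	if (ret)
 *		goto err_detach;
 *	// ... use the mapping ...
 *	iommu_unmap(domain, 0x100000, SZ_4K);
 *	iommu_detach_device(domain, dev);
 */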
iommu: use the __iommu_attach_device() directly for deferred attach
Currently, domain attach is allowed to be deferred from the iommu
driver to the device driver, and when the iommu initializes, the devices
on the bus are scanned and the default groups are allocated.
Due to the above changes, some devices could be added to the same
group as below:
[ 3.859417] pci 0000:01:00.0: Adding to iommu group 16
[ 3.864572] pci 0000:01:00.1: Adding to iommu group 16
[ 3.869738] pci 0000:02:00.0: Adding to iommu group 17
[ 3.874892] pci 0000:02:00.1: Adding to iommu group 17
But when attaching these devices, it doesn't allow that a group has
more than one device, otherwise it will return an error. This conflicts
with the deferred attaching. Unfortunately, it has two devices in the
same group for my side, for example:
[ 9.627014] iommu_group_device_count(): device name[0]:0000:01:00.0
[ 9.633545] iommu_group_device_count(): device name[1]:0000:01:00.1
...
[ 10.255609] iommu_group_device_count(): device name[0]:0000:02:00.0
[ 10.262144] iommu_group_device_count(): device name[1]:0000:02:00.1
This ultimately caused the tg3 driver to fail when it calls
dma_alloc_coherent() to allocate coherent memory in tg3_test_dma().
[ 9.660310] tg3 0000:01:00.0: DMA engine test failed, aborting
[ 9.754085] tg3: probe of 0000:01:00.0 failed with error -12
[ 9.997512] tg3 0000:01:00.1: DMA engine test failed, aborting
[ 10.043053] tg3: probe of 0000:01:00.1 failed with error -12
[ 10.288905] tg3 0000:02:00.0: DMA engine test failed, aborting
[ 10.334070] tg3: probe of 0000:02:00.0 failed with error -12
[ 10.578303] tg3 0000:02:00.1: DMA engine test failed, aborting
[ 10.622629] tg3: probe of 0000:02:00.1 failed with error -12
In addition, the similar situations also occur in other drivers such
as the bnxt_en driver. That can be reproduced easily in kdump kernel
when SME is active.
Let's move the handling currently in iommu_dma_deferred_attach() into
the iommu core code so that it can call the __iommu_attach_device()
directly instead of the iommu_attach_device(). The external interface
iommu_attach_device() is not suitable for handling this situation.
Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20210126115337.20068-3-lijiang@redhat.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-26 19:53:37 +08:00
int iommu_deferred_attach ( struct device * dev , struct iommu_domain * domain )
{
2023-01-10 10:54:07 +08:00
if ( dev - > iommu & & dev - > iommu - > attach_deferred )
2021-01-26 19:53:37 +08:00
return __iommu_attach_device ( domain , dev ) ;
return 0 ;
}
2015-05-28 18:41:30 +02:00
void iommu_detach_device ( struct iommu_domain * domain , struct device * dev )
{
2023-08-22 13:15:56 -03:00
/* Caller must be a probed driver on dev */
struct iommu_group * group = dev - > iommu_group ;
2015-05-28 18:41:30 +02:00
2017-12-20 09:48:36 -07:00
if ( ! group )
return ;
2015-05-28 18:41:30 +02:00
mutex_lock ( & group - > mutex ) ;
2022-05-09 13:19:19 -03:00
if ( WARN_ON ( domain ! = group - > domain ) | |
2023-05-11 01:41:59 -03:00
WARN_ON ( list_count_nodes ( & group - > devices ) ! = 1 ) )
2015-05-28 18:41:30 +02:00
goto out_unlock ;
2022-05-09 13:19:19 -03:00
__iommu_group_set_core_domain ( group ) ;
2015-05-28 18:41:30 +02:00
out_unlock :
mutex_unlock ( & group - > mutex ) ;
}
2008-11-26 17:21:24 +01:00
EXPORT_SYMBOL_GPL ( iommu_detach_device ) ;
2015-05-28 18:41:32 +02:00
struct iommu_domain * iommu_get_domain_for_dev ( struct device * dev )
{
2023-08-22 13:15:56 -03:00
/* Caller must be a probed driver on dev */
struct iommu_group * group = dev - > iommu_group ;
2015-05-28 18:41:32 +02:00
2017-08-17 11:40:08 +01:00
if ( ! group )
2015-05-28 18:41:32 +02:00
return NULL ;
2023-08-22 13:15:56 -03:00
return group - > domain ;
2015-05-28 18:41:32 +02:00
}
EXPORT_SYMBOL_GPL ( iommu_get_domain_for_dev ) ;
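/*
 * Illustrative sketch (not part of iommu.c): a driver can inspect the domain
 * currently attached to its device, for example to check whether it is being
 * translated or running in passthrough.  The wrapper name is hypothetical;
 * domain->type and IOMMU_DOMAIN_IDENTITY are real.
 *
 *	static bool my_dev_is_translated(struct device *dev)
 *	{
 *		struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
 *
 *		return domain && domain->type != IOMMU_DOMAIN_IDENTITY;
 *	}
 */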
2008-11-26 17:21:24 +01:00
2012-05-30 14:18:53 -06:00
/*
2018-09-12 16:24:12 +01:00
* For IOMMU_DOMAIN_DMA implementations which already provide their own
* guarantees that the group and its default domain are valid and correct .
*/
struct iommu_domain * iommu_get_dma_domain ( struct device * dev )
{
return dev - > iommu_group - > default_domain ;
}
2025-02-25 17:18:48 -08:00
static void * iommu_make_pasid_array_entry ( struct iommu_domain * domain ,
struct iommu_attach_handle * handle )
{
if ( handle ) {
handle - > domain = domain ;
return xa_tag_pointer ( handle , IOMMU_PASID_ARRAY_HANDLE ) ;
}
return xa_tag_pointer ( domain , IOMMU_PASID_ARRAY_DOMAIN ) ;
}
2025-04-24 11:41:23 +08:00
static bool domain_iommu_ops_compatible ( const struct iommu_ops * ops ,
struct iommu_domain * domain )
{
if ( domain - > owner = = ops )
return true ;
/* For static domains, owner isn't set. */
if ( domain = = ops - > blocked_domain | | domain = = ops - > identity_domain )
return true ;
return false ;
}
2015-05-28 18:41:31 +02:00
static int __iommu_attach_group ( struct iommu_domain * domain ,
struct iommu_group * group )
{
2023-11-21 18:03:59 +00:00
struct device * dev ;
2022-05-09 13:19:19 -03:00
if ( group - > domain & & group - > domain ! = group - > default_domain & &
group - > domain ! = group - > blocking_domain )
2015-05-28 18:41:31 +02:00
return - EBUSY ;
2023-11-21 18:03:59 +00:00
dev = iommu_group_first_dev ( group ) ;
2025-04-24 11:41:23 +08:00
if ( ! dev_has_iommu ( dev ) | |
! domain_iommu_ops_compatible ( dev_iommu_ops ( dev ) , domain ) )
2023-11-21 18:03:59 +00:00
return - EINVAL ;
2023-05-11 01:42:02 -03:00
return __iommu_group_set_domain ( group , domain ) ;
2012-05-30 14:18:53 -06:00
}
2022-10-17 16:01:22 -07:00
/**
 * iommu_attach_group - Attach an IOMMU domain to an IOMMU group
 * @domain: IOMMU domain to attach
 * @group: IOMMU group that will be attached
 *
 * Returns 0 on success and error code on failure
 *
 * Note that EINVAL can be treated as a soft failure, indicating
 * that certain configuration of the domain is incompatible with
 * the group. In this case attaching a different domain to the
 * group may succeed.
 */
int iommu_attach_group(struct iommu_domain *domain, struct iommu_group *group)
{
2015-05-28 18:41:31 +02:00
	int ret;

	mutex_lock(&group->mutex);
	ret = __iommu_attach_group(domain, group);
	mutex_unlock(&group->mutex);

	return ret;
}
EXPORT_SYMBOL_GPL(iommu_attach_group);
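/*
 * Illustrative sketch (not part of this file): a hypothetical caller that
 * treats -EINVAL as the soft failure described above and retries with a
 * differently configured domain:
 *
 *	ret = iommu_attach_group(preferred_domain, group);
 *	if (ret == -EINVAL)
 *		ret = iommu_attach_group(fallback_domain, group);
 *
 * preferred_domain and fallback_domain are placeholder names for this sketch.
 */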
2023-05-11 01:42:01 -03:00
static int __iommu_device_set_domain(struct iommu_group *group,
				     struct device *dev,
				     struct iommu_domain *new_domain,
				     unsigned int flags)
{
2023-05-11 01:42:01 -03:00
	int ret;
2023-08-09 20:48:02 +08:00
	/*
	 * If the device requires IOMMU_RESV_DIRECT then we cannot allow
	 * the blocking domain to be attached as it does not contain the
	 * required 1:1 mapping. This test effectively excludes the device
	 * being used with iommu_group_claim_dma_owner() which will block
	 * vfio and iommufd as well.
	 */
	if (dev->iommu->require_direct &&
	    (new_domain->type == IOMMU_DOMAIN_BLOCKED ||
	     new_domain == group->blocking_domain)) {
		dev_warn(dev,
			 "Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.\n");
		return -EINVAL;
	}
2023-05-11 01:42:04 -03:00
	if (dev->iommu->attach_deferred) {
		if (new_domain == group->default_domain)
			return 0;
		dev->iommu->attach_deferred = 0;
	}
2023-05-11 01:42:01 -03:00
	ret = __iommu_attach_device(new_domain, dev);
	if (ret) {
		/*
		 * If we have a blocking domain then try to attach that in hopes
		 * of avoiding a UAF. Modern drivers should implement blocking
		 * domains as global statics that cannot fail.
		 */
		if ((flags & IOMMU_SET_DOMAIN_MUST_SUCCEED) &&
		    group->blocking_domain &&
		    group->blocking_domain != new_domain)
			__iommu_attach_device(group->blocking_domain, dev);
		return ret;
	}
	return 0;
}
2023-05-11 01:42:01 -03:00
/*
 * If 0 is returned the group's domain is new_domain. If an error is returned
 * then the group's domain will be set back to the existing domain unless
 * IOMMU_SET_DOMAIN_MUST_SUCCEED, otherwise an error is returned and the
 * group's domain is left inconsistent. It is a driver bug to fail attach with
 * a previously good domain. We try to avoid a kernel UAF because of this.
 *
 * IOMMU groups are really the natural working unit of the IOMMU, but the IOMMU
 * API works on domains and devices. Bridge that gap by iterating over the
 * devices in a group. Ideally we'd have a single device which represents the
 * requestor ID of the group, but we also allow IOMMU drivers to create policy
 * defined minimum sets, where the physical hardware may be able to distinguish
 * members, but we wish to group them at a higher level (ex. untrusted
 * multi-function PCI devices). Thus we attach each device.
 */
static int __iommu_group_set_domain_internal(struct iommu_group *group,
					     struct iommu_domain *new_domain,
					     unsigned int flags)
2015-05-28 18:41:31 +02:00
{
2023-05-11 01:42:01 -03:00
	struct group_device *last_gdev;
	struct group_device *gdev;
	int result;
2015-05-28 18:41:31 +02:00
	int ret;

2023-05-11 01:42:01 -03:00
	lockdep_assert_held(&group->mutex);

2022-05-09 13:19:19 -03:00
	if (group->domain == new_domain)
		return 0;

2023-09-13 10:43:48 -03:00
	if (WARN_ON(!new_domain))
		return -EINVAL;
2015-05-28 18:41:31 +02:00

2022-05-09 13:19:19 -03:00
	/*
	 * Changing the domain is done by calling attach_dev() on the new
	 * domain. This switch does not have to be atomic and DMA can be
	 * discarded during the transition. DMA must only be able to access
	 * either new_domain or group->domain, never something else.
	 */
2023-05-11 01:42:01 -03:00
	result = 0;
	for_each_group_device(group, gdev) {
		ret = __iommu_device_set_domain(group, gdev->dev, new_domain,
						flags);
		if (ret) {
			result = ret;
			/*
			 * Keep trying the other devices in the group. If a
			 * driver fails attach to an otherwise good domain, and
			 * does not support blocking domains, it should at least
			 * drop its reference on the current domain so we don't
			 * UAF.
			 */
			if (flags & IOMMU_SET_DOMAIN_MUST_SUCCEED)
				continue;
			goto err_revert;
		}
	}
2022-05-09 13:19:19 -03:00
	group->domain = new_domain;
2023-05-11 01:42:01 -03:00
	return result;

err_revert:
	/*
	 * This is called in error unwind paths. A well behaved driver should
	 * always allow us to attach to a domain that was already attached.
	 */
	last_gdev = gdev;
	for_each_group_device(group, gdev) {
		/*
2023-09-13 10:43:48 -03:00
		 * A NULL domain can happen only for first probe, in which case
		 * we leave group->domain as NULL and let release clean
		 * everything up.
2023-05-11 01:42:01 -03:00
		 */
		if (group->domain)
			WARN_ON(__iommu_device_set_domain(
				group, gdev->dev, group->domain,
				IOMMU_SET_DOMAIN_MUST_SUCCEED));
		if (gdev == last_gdev)
			break;
	}
	return ret;
2015-05-28 18:41:31 +02:00
}
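/*
 * Illustrative sketch (not part of this excerpt): callers of the internal
 * helper above typically hide the flags argument behind thin wrappers, for
 * example one that passes no flags and one that demands success, roughly:
 *
 *	static int example_set_domain(struct iommu_group *group,
 *				      struct iommu_domain *new_domain)
 *	{
 *		return __iommu_group_set_domain_internal(group, new_domain, 0);
 *	}
 *
 *	static void example_set_domain_nofail(struct iommu_group *group,
 *					      struct iommu_domain *new_domain)
 *	{
 *		WARN_ON(__iommu_group_set_domain_internal(
 *			group, new_domain, IOMMU_SET_DOMAIN_MUST_SUCCEED));
 *	}
 *
 * The example_ names are placeholders for this sketch.
 */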
void iommu_detach_group(struct iommu_domain *domain, struct iommu_group *group)
{
2015-05-28 18:41:31 +02:00
	mutex_lock(&group->mutex);
2022-05-09 13:19:19 -03:00
	__iommu_group_set_core_domain(group);
2015-05-28 18:41:31 +02:00
	mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_detach_group);
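/*
 * Illustrative sketch (not part of this file): the usual lifecycle pairs the
 * attach and detach calls around the period a private domain is needed:
 *
 *	ret = iommu_attach_group(my_domain, group);
 *	if (ret)
 *		return ret;
 *	...use my_domain for DMA...
 *	iommu_detach_group(my_domain, group);
 *
 * After detach the group falls back to its core-owned domain. my_domain is a
 * placeholder name for this sketch.
 */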
2013-03-29 01:23:58 +05:30
phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
2008-11-26 17:21:24 +01:00
{
2021-07-15 14:04:24 +01:00
	if (domain->type == IOMMU_DOMAIN_IDENTITY)
		return iova;

	if (domain->type == IOMMU_DOMAIN_BLOCKED)
2011-09-06 16:44:29 +02:00
		return 0;

	return domain->ops->iova_to_phys(domain, iova);
2008-11-26 17:21:24 +01:00
}
EXPORT_SYMBOL_GPL(iommu_iova_to_phys);
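/*
 * Illustrative sketch (not part of this file): a hypothetical caller can use
 * the lookup above to verify that an IOVA it mapped earlier still resolves to
 * the expected physical address:
 *
 *	if (iommu_iova_to_phys(domain, iova) != expected_paddr)
 *		pr_warn("mapping for iova %pad is missing or stale\n", &iova);
 *
 * expected_paddr is a placeholder for this sketch; a zero return means no
 * translation exists (or the domain is blocked).
 */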
2009-03-18 15:33:06 +08:00
2021-06-16 06:38:47 -07:00
static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
2021-06-16 06:38:48 -07:00
			   phys_addr_t paddr, size_t size, size_t *count)
2013-06-17 19:57:34 -06:00
{
2021-06-16 06:38:48 -07:00
	unsigned int pgsize_idx, pgsize_idx_next;
2021-06-16 06:38:46 -07:00
	unsigned long pgsizes;
2021-06-16 06:38:48 -07:00
	size_t offset, pgsize, pgsize_next;
2025-04-25 10:08:37 -03:00
	size_t offset_end;
2021-06-16 06:38:47 -07:00
	unsigned long addr_merge = paddr | iova;
2013-06-17 19:57:34 -06:00
2021-06-16 06:38:46 -07:00
	/* Page sizes supported by the hardware and small enough for @size */
	pgsizes = domain->pgsize_bitmap & GENMASK(__fls(size), 0);
2013-06-17 19:57:34 -06:00
2021-06-16 06:38:46 -07:00
	/* Constrain the page sizes further based on the maximum alignment */
	if (likely(addr_merge))
		pgsizes &= GENMASK(__ffs(addr_merge), 0);
2013-06-17 19:57:34 -06:00
2021-06-16 06:38:46 -07:00
	/* Make sure we have at least one suitable page size */
	BUG_ON(!pgsizes);
2013-06-17 19:57:34 -06:00
2021-06-16 06:38:46 -07:00
	/* Pick the biggest page size remaining */
	pgsize_idx = __fls(pgsizes);
	pgsize = BIT(pgsize_idx);
2021-06-16 06:38:48 -07:00
	if (!count)
		return pgsize;
2013-06-17 19:57:34 -06:00
2021-06-16 06:38:48 -07:00
	/* Find the next biggest supported page size, if it exists */
	pgsizes = domain->pgsize_bitmap & ~GENMASK(pgsize_idx, 0);
	if (!pgsizes)
		goto out_set_count;
2013-06-17 19:57:34 -06:00
2021-06-16 06:38:48 -07:00
	pgsize_idx_next = __ffs(pgsizes);
	pgsize_next = BIT(pgsize_idx_next);
2013-06-17 19:57:34 -06:00
2021-06-16 06:38:48 -07:00
	/*
	 * There's no point trying a bigger page size unless the virtual
	 * and physical addresses are similarly offset within the larger page.
	 */
	if ((iova ^ paddr) & (pgsize_next - 1))
		goto out_set_count;
2013-06-17 19:57:34 -06:00
2021-06-16 06:38:48 -07:00
	/* Calculate the offset to the next page size alignment boundary */
	offset = pgsize_next - (addr_merge & (pgsize_next - 1));
2013-06-17 19:57:34 -06:00
2021-06-16 06:38:48 -07:00
	/*
	 * If size is big enough to accommodate the larger page, reduce
	 * the number of smaller pages.
	 */
2025-04-25 10:08:37 -03:00
	if (!check_add_overflow(offset, pgsize_next, &offset_end) &&
	    offset_end <= size)
2021-06-16 06:38:48 -07:00
		size = offset;

out_set_count:
	*count = size >> pgsize_idx;
2013-06-17 19:57:34 -06:00
	return pgsize;
}
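/*
 * Worked example (not from the source, assuming pgsize_bitmap = 4K | 2M):
 * for iova = 0x201000, paddr = 0x401000 and size = 0x400000, addr_merge is
 * only 4K aligned, so pgsize = 4K. Both addresses share the same offset
 * within a 2M page, and the distance to the next 2M boundary is
 * 0x200000 - 0x1000 = 0x1ff000, which fits within size, so the function
 * returns pgsize = 4K with *count = 0x1ff000 >> 12 = 511. The caller maps 511
 * small pages up to the 2M boundary, after which a later call can pick 2M
 * pages.
 */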
2025-05-05 10:01:40 +03:00
int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
		     phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
2010-01-08 13:35:09 +01:00
{
2022-02-16 10:52:49 +08:00
	const struct iommu_domain_ops *ops = domain->ops;
	unsigned long orig_iova = iova;
	unsigned int min_pagesz;
	size_t orig_size = size;
2016-02-10 10:18:04 +09:00
	phys_addr_t orig_paddr = paddr;
	int ret = 0;
2010-01-08 13:35:09 +01:00
2025-05-05 10:01:40 +03:00
	might_sleep_if(gfpflags_allow_blocking(gfp));

2015-03-26 13:43:06 +01:00
	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
		return -EINVAL;
2023-09-12 17:18:44 +01:00
	if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL))
		return -ENODEV;
2025-05-05 10:01:40 +03:00
	/* Discourage passing strange GFP flags */
	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
				__GFP_HIGHMEM)))
		return -EINVAL;
	/* find out the minimum page size supported */
2016-04-07 18:42:06 +01:00
	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
	/*
	 * both the virtual address and the physical one, as well as
	 * the size of the mapping, must be aligned (at least) to the
	 * size of the smallest page supported by the hardware
	 */
	if (!IS_ALIGNED(iova | paddr | size, min_pagesz)) {
2013-08-22 10:25:42 -03:00
		pr_err("unaligned: iova 0x%lx pa %pa size 0x%zx min_pagesz 0x%x\n",
2013-06-23 12:29:04 -07:00
		       iova, &paddr, size, min_pagesz);
		return -EINVAL;
	}
2013-08-22 10:25:42 -03:00
pr_debug ( " map: iova 0x%lx pa %pa size 0x%zx \n " , iova , & paddr , size ) ;

	while (size) {
		size_t pgsize, count, mapped = 0;

		pgsize = iommu_pgsize(domain, iova, paddr, size, &count);
		pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx count %zu\n",
			 iova, &paddr, pgsize, count);
		ret = ops->map_pages(domain, iova, paddr, pgsize, count, prot,
				     gfp, &mapped);
		/*
		 * Some pages may have been mapped, even if an error occurred,
		 * so we should account for those so they can be unmapped.
		 */
		size -= mapped;
		if (ret)
			break;

		iova += mapped;
		paddr += mapped;
	}

	/* unroll mapping in case something went wrong */
	if (ret)
		iommu_unmap(domain, orig_iova, orig_size - size);
	else
		trace_map(orig_iova, orig_paddr, orig_size);

	return ret;
}

int iommu_sync_map(struct iommu_domain *domain, unsigned long iova, size_t size)
{
	const struct iommu_domain_ops *ops = domain->ops;

	if (!ops->iotlb_sync_map)
		return 0;
	return ops->iotlb_sync_map(domain, iova, size);
}
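
/*
 * Illustrative sketch (hypothetical caller): mappings built from several
 * pieces can batch the IOTLB sync by calling iommu_map_nosync() repeatedly
 * and issuing one iommu_sync_map() at the end, which is what iommu_map_sg()
 * below does. The chunk variables are assumptions for the example:
 *
 *	ret = iommu_map_nosync(domain, iova, pa0, len0, prot, GFP_KERNEL);
 *	if (!ret)
 *		ret = iommu_map_nosync(domain, iova + len0, pa1, len1,
 *				       prot, GFP_KERNEL);
 *	if (!ret)
 *		ret = iommu_sync_map(domain, iova, len0 + len1);
 *	if (ret)
 *		iommu_unmap(domain, iova, len0 + len1);
 */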

int iommu_map(struct iommu_domain *domain, unsigned long iova,
	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
{
	int ret;

	ret = iommu_map_nosync(domain, iova, paddr, size, prot, gfp);
	if (ret)
		return ret;

	ret = iommu_sync_map(domain, iova, size);
	if (ret)
		iommu_unmap(domain, iova, size);

	return ret;
}
EXPORT_SYMBOL_GPL(iommu_map);
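
/*
 * Illustrative usage sketch (hypothetical caller): a driver that owns an
 * unmanaged domain and has attached its device can establish and later tear
 * down a translation. buf_iova, buf_phys and buf_size are assumptions for
 * the example:
 *
 *	ret = iommu_map(domain, buf_iova, buf_phys, buf_size,
 *			IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
 *	if (ret)
 *		return ret;
 *	...
 *	iommu_unmap(domain, buf_iova, buf_size);
 */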

static size_t __iommu_unmap(struct iommu_domain *domain,
			    unsigned long iova, size_t size,
			    struct iommu_iotlb_gather *iotlb_gather)
{
	const struct iommu_domain_ops *ops = domain->ops;
	size_t unmapped_page, unmapped = 0;
	unsigned long orig_iova = iova;
	unsigned int min_pagesz;

	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
		return 0;

	if (WARN_ON(!ops->unmap_pages || domain->pgsize_bitmap == 0UL))
		return 0;

	/* find out the minimum page size supported */
	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);

	/*
	 * The virtual address, as well as the size of the mapping, must be
	 * aligned (at least) to the size of the smallest page supported
	 * by the hardware
	 */
	if (!IS_ALIGNED(iova | size, min_pagesz)) {
		pr_err("unaligned: iova 0x%lx size 0x%zx min_pagesz 0x%x\n",
		       iova, size, min_pagesz);
		return 0;
	}

	pr_debug("unmap this: iova 0x%lx size 0x%zx\n", iova, size);

	/*
	 * Keep iterating until we either unmap 'size' bytes (or more)
	 * or we hit an area that isn't mapped.
	 */
	while (unmapped < size) {
		size_t pgsize, count;

		pgsize = iommu_pgsize(domain, iova, iova, size - unmapped, &count);
		unmapped_page = ops->unmap_pages(domain, iova, pgsize, count, iotlb_gather);
		if (!unmapped_page)
			break;

		pr_debug("unmapped: iova 0x%lx size 0x%zx\n",
			 iova, unmapped_page);

		iova += unmapped_page;
		unmapped += unmapped_page;
	}

	trace_unmap(orig_iova, size, unmapped);
	return unmapped;
}

/**
 * iommu_unmap() - Remove mappings from a range of IOVA
 * @domain: Domain to manipulate
 * @iova: IO virtual address to start
 * @size: Length of the range starting from @iova
 *
 * iommu_unmap() will remove a translation created by iommu_map(). It cannot
 * subdivide a mapping created by iommu_map(), so it should be called with IOVA
 * ranges that match what was passed to iommu_map(). The range can aggregate
 * contiguous iommu_map() calls so long as no individual range is split.
 *
 * Returns: Number of bytes of IOVA unmapped. iova + res will be the point
 * unmapping stopped.
 */
size_t iommu_unmap(struct iommu_domain *domain,
		   unsigned long iova, size_t size)
{
	struct iommu_iotlb_gather iotlb_gather;
	size_t ret;

	iommu_iotlb_gather_init(&iotlb_gather);
	ret = __iommu_unmap(domain, iova, size, &iotlb_gather);
	iommu_iotlb_sync(domain, &iotlb_gather);

	return ret;
}
EXPORT_SYMBOL_GPL(iommu_unmap);

/**
 * iommu_unmap_fast() - Remove mappings from a range of IOVA without IOTLB sync
 * @domain: Domain to manipulate
 * @iova: IO virtual address to start
 * @size: Length of the range starting from @iova
 * @iotlb_gather: range information for a pending IOTLB flush
 *
 * iommu_unmap_fast() will remove a translation created by iommu_map().
 * It can't subdivide a mapping created by iommu_map(), so it should be
 * called with IOVA ranges that match what was passed to iommu_map(). The
 * range can aggregate contiguous iommu_map() calls so long as no individual
 * range is split.
 *
 * Basically iommu_unmap_fast() is the same as iommu_unmap() but for callers
 * which manage the IOTLB flushing externally to perform a batched sync.
 *
 * Returns: Number of bytes of IOVA unmapped. iova + res will be the point
 * unmapping stopped.
 */
size_t iommu_unmap_fast(struct iommu_domain *domain,
			unsigned long iova, size_t size,
			struct iommu_iotlb_gather *iotlb_gather)
{
	return __iommu_unmap(domain, iova, size, iotlb_gather);
}
EXPORT_SYMBOL_GPL(iommu_unmap_fast);
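
/*
 * Illustrative sketch (hypothetical caller): tearing down several ranges with
 * a single deferred IOTLB flush. iova0/iova1/len0/len1 are assumptions for
 * the example:
 *
 *	struct iommu_iotlb_gather gather;
 *
 *	iommu_iotlb_gather_init(&gather);
 *	iommu_unmap_fast(domain, iova0, len0, &gather);
 *	iommu_unmap_fast(domain, iova1, len1, &gather);
 *	iommu_iotlb_sync(domain, &gather);
 */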

ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
		     struct scatterlist *sg, unsigned int nents, int prot,
		     gfp_t gfp)
{
	size_t len = 0, mapped = 0;
	phys_addr_t start;
	unsigned int i = 0;
	int ret;

	while (i <= nents) {
		phys_addr_t s_phys = sg_phys(sg);

		if (len && s_phys != start + len) {
			ret = iommu_map_nosync(domain, iova + mapped, start,
					       len, prot, gfp);
			if (ret)
				goto out_err;

			mapped += len;
			len = 0;
		}

		if (sg_dma_is_bus_address(sg))
			goto next;

		if (len) {
			len += sg->length;
		} else {
			len = sg->length;
			start = s_phys;
		}

next:
		if (++i < nents)
			sg = sg_next(sg);
	}

	ret = iommu_sync_map(domain, iova, mapped);
	if (ret)
		goto out_err;

	return mapped;

out_err:
	/* undo mappings already done */
	iommu_unmap(domain, iova, mapped);

	return ret;
}
EXPORT_SYMBOL_GPL(iommu_map_sg);
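
/*
 * Illustrative sketch (hypothetical caller): mapping an sg_table at a
 * caller-chosen IOVA. sgt and base_iova are assumptions for the example:
 *
 *	ssize_t mapped;
 *
 *	mapped = iommu_map_sg(domain, base_iova, sgt->sgl, sgt->orig_nents,
 *			      IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
 *	if (mapped < 0)
 *		return mapped;
 *	...
 *	iommu_unmap(domain, base_iova, mapped);
 */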

/**
 * report_iommu_fault() - report about an IOMMU fault to the IOMMU framework
 * @domain: the iommu domain where the fault has happened
 * @dev: the device where the fault has happened
 * @iova: the faulting address
 * @flags: mmu fault flags (e.g. IOMMU_FAULT_READ/IOMMU_FAULT_WRITE/...)
 *
 * This function should be called by the low-level IOMMU implementations
 * whenever IOMMU faults happen, to allow high-level users, that are
 * interested in such events, to know about them.
 *
 * This event may be useful for several possible use cases:
 * - mere logging of the event
 * - dynamic TLB/PTE loading
 * - if restarting of the faulting device is required
 *
 * Returns 0 on success and an appropriate error code otherwise (if dynamic
 * PTE/TLB loading will one day be supported, implementations will be able
 * to tell whether it succeeded or not according to this return value).
 *
 * Specifically, -ENOSYS is returned if a fault handler isn't installed
 * (though fault handlers can also return -ENOSYS, in case they want to
 * elicit the default behavior of the IOMMU drivers).
 */
int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
		       unsigned long iova, int flags)
{
	int ret = -ENOSYS;

	/*
	 * if upper layers showed interest and installed a fault handler,
	 * invoke it.
	 */
	if (domain->cookie_type == IOMMU_COOKIE_FAULT_HANDLER &&
	    domain->handler)
		ret = domain->handler(domain, dev, iova, flags,
				      domain->handler_token);

	trace_io_page_fault(dev, iova, flags);

	return ret;
}
EXPORT_SYMBOL_GPL(report_iommu_fault);
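
/*
 * Illustrative sketch (hypothetical driver code): a low-level IOMMU driver's
 * fault interrupt handler forwarding the event. smmu_read_fault_iova() and
 * smmu_fault_is_write() are made-up stand-ins for whatever the hardware
 * reports:
 *
 *	unsigned long iova = smmu_read_fault_iova(smmu);
 *	int flags = smmu_fault_is_write(smmu) ? IOMMU_FAULT_WRITE
 *					      : IOMMU_FAULT_READ;
 *
 *	if (report_iommu_fault(domain, dev, iova, flags))
 *		dev_err_ratelimited(dev, "unhandled fault at 0x%lx\n", iova);
 */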

static int __init iommu_init(void)
{
	iommu_group_kset = kset_create_and_add("iommu_groups",
					       NULL, kernel_kobj);
	BUG_ON(!iommu_group_kset);

	iommu_debugfs_setup();

	return 0;
}
core_initcall(iommu_init);

int iommu_set_pgtable_quirks(struct iommu_domain *domain,
			     unsigned long quirk)
{
	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
		return -EINVAL;
	if (!domain->ops->set_pgtable_quirks)
		return -EINVAL;
	return domain->ops->set_pgtable_quirks(domain, quirk);
}
EXPORT_SYMBOL_GPL(iommu_set_pgtable_quirks);
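
/*
 * Illustrative sketch (hypothetical caller): a driver that owns an unmanaged
 * domain can request page-table quirks before attaching/mapping. The quirk
 * value is io-pgtable specific; IO_PGTABLE_QUIRK_ARM_OUTER_WBWA is used here
 * only as an example:
 *
 *	ret = iommu_set_pgtable_quirks(domain, IO_PGTABLE_QUIRK_ARM_OUTER_WBWA);
 *	if (ret)
 *		dev_warn(dev, "failed to set page-table quirks: %d\n", ret);
 */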

/**
 * iommu_get_resv_regions - get reserved regions
 * @dev: device for which to get reserved regions
 * @list: reserved region list for device
 *
 * This returns a list of reserved IOVA regions specific to this device.
 * A domain user should not map IOVA in these ranges.
 */
void iommu_get_resv_regions(struct device *dev, struct list_head *list)
{
	const struct iommu_ops *ops = dev_iommu_ops(dev);

	if (ops->get_resv_regions)
		ops->get_resv_regions(dev, list);
}
EXPORT_SYMBOL_GPL(iommu_get_resv_regions);

/**
 * iommu_put_resv_regions - release reserved regions
 * @dev: device for which to free reserved regions
 * @list: reserved region list for device
 *
 * This releases a reserved region list acquired by iommu_get_resv_regions().
 */
void iommu_put_resv_regions(struct device *dev, struct list_head *list)
{
	struct iommu_resv_region *entry, *next;

	list_for_each_entry_safe(entry, next, list, list) {
		if (entry->free)
			entry->free(dev, entry);
		else
			kfree(entry);
	}
}
EXPORT_SYMBOL(iommu_put_resv_regions);
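
/*
 * Illustrative sketch (hypothetical caller): walking a device's reserved
 * regions so no IOVA gets allocated inside them. reserve_iova_range() is a
 * made-up placeholder for the caller's IOVA allocator:
 *
 *	struct iommu_resv_region *region;
 *	LIST_HEAD(resv_regions);
 *
 *	iommu_get_resv_regions(dev, &resv_regions);
 *	list_for_each_entry(region, &resv_regions, list)
 *		reserve_iova_range(region->start, region->length);
 *	iommu_put_resv_regions(dev, &resv_regions);
 */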

struct iommu_resv_region *iommu_alloc_resv_region(phys_addr_t start,
						  size_t length, int prot,
						  enum iommu_resv_type type,
						  gfp_t gfp)
{
	struct iommu_resv_region *region;

	region = kzalloc(sizeof(*region), gfp);
	if (!region)
		return NULL;

	INIT_LIST_HEAD(&region->list);
	region->start = start;
	region->length = length;
	region->prot = prot;
	region->type = type;
	return region;
}
EXPORT_SYMBOL_GPL(iommu_alloc_resv_region);
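
/*
 * Illustrative sketch (hypothetical driver callback): an IOMMU driver
 * advertising a software-managed MSI window from its ->get_resv_regions()
 * implementation. MY_SW_MSI_BASE and MY_SW_MSI_SIZE are made-up constants:
 *
 *	struct iommu_resv_region *region;
 *
 *	region = iommu_alloc_resv_region(MY_SW_MSI_BASE, MY_SW_MSI_SIZE,
 *					 IOMMU_WRITE | IOMMU_MMIO,
 *					 IOMMU_RESV_SW_MSI, GFP_KERNEL);
 *	if (region)
 *		list_add_tail(&region->list, list);
 */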

void iommu_set_default_passthrough(bool cmd_line)
{
	if (cmd_line)
		iommu_cmd_line |= IOMMU_CMD_LINE_DMA_API;
	iommu_def_domain_type = IOMMU_DOMAIN_IDENTITY;
}

void iommu_set_default_translated(bool cmd_line)
{
	if (cmd_line)
		iommu_cmd_line |= IOMMU_CMD_LINE_DMA_API;
	iommu_def_domain_type = IOMMU_DOMAIN_DMA;
}

bool iommu_default_passthrough(void)
{
	return iommu_def_domain_type == IOMMU_DOMAIN_IDENTITY;
}
EXPORT_SYMBOL_GPL(iommu_default_passthrough);

static const struct iommu_device *iommu_from_fwnode(const struct fwnode_handle *fwnode)
{
	const struct iommu_device *iommu, *ret = NULL;

	spin_lock(&iommu_device_lock);
	list_for_each_entry(iommu, &iommu_device_list, list)
		if (iommu->fwnode == fwnode) {
			ret = iommu;
			break;
		}
	spin_unlock(&iommu_device_lock);
	return ret;
}

const struct iommu_ops *iommu_ops_from_fwnode(const struct fwnode_handle *fwnode)
{
	const struct iommu_device *iommu = iommu_from_fwnode(fwnode);

	return iommu ? iommu->ops : NULL;
}

int iommu_fwspec_init(struct device *dev, struct fwnode_handle *iommu_fwnode)
{
	const struct iommu_device *iommu = iommu_from_fwnode(iommu_fwnode);
	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);

	if (!iommu)
		return driver_deferred_probe_check_state(dev);
	if (!dev->iommu && !READ_ONCE(iommu->ready))
		return -EPROBE_DEFER;

	if (fwspec)
		return iommu->ops == iommu_fwspec_ops(fwspec) ? 0 : -EINVAL;

	if (!dev_iommu_get(dev))
		return -ENOMEM;

	/* Preallocate for the overwhelmingly common case of 1 ID */
	fwspec = kzalloc(struct_size(fwspec, ids, 1), GFP_KERNEL);
	if (!fwspec)
		return -ENOMEM;

	fwnode_handle_get(iommu_fwnode);
	fwspec->iommu_fwnode = iommu_fwnode;
	dev_iommu_fwspec_set(dev, fwspec);
	return 0;
}
EXPORT_SYMBOL_GPL(iommu_fwspec_init);

void iommu_fwspec_free(struct device *dev)
{
	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);

	if (fwspec) {
		fwnode_handle_put(fwspec->iommu_fwnode);
		kfree(fwspec);
		dev_iommu_fwspec_set(dev, NULL);
	}
}

int iommu_fwspec_add_ids(struct device *dev, const u32 *ids, int num_ids)
{
	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
	int i, new_num;

	if (!fwspec)
		return -EINVAL;

	new_num = fwspec->num_ids + num_ids;
	if (new_num > 1) {
		fwspec = krealloc(fwspec, struct_size(fwspec, ids, new_num),
				  GFP_KERNEL);
		if (!fwspec)
			return -ENOMEM;

		dev_iommu_fwspec_set(dev, fwspec);
	}

	for (i = 0; i < num_ids; i++)
		fwspec->ids[fwspec->num_ids + i] = ids[i];

	fwspec->num_ids = new_num;
	return 0;
}
EXPORT_SYMBOL_GPL(iommu_fwspec_add_ids);
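
/*
 * Illustrative sketch (hypothetical firmware-parsing code): the OF/ACPI glue
 * typically pairs these helpers when turning a firmware description into a
 * per-device fwspec. iommu_np and sid are assumptions for the example:
 *
 *	ret = iommu_fwspec_init(dev, of_fwnode_handle(iommu_np));
 *	if (!ret)
 *		ret = iommu_fwspec_add_ids(dev, &sid, 1);
 *	if (ret)
 *		iommu_fwspec_free(dev);
 */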
iommu: Add APIs for multiple domains per device
Sharing a physical PCI device in a finer-granularity way
is becoming a consensus in the industry. IOMMU vendors
are also engaging efforts to support such sharing as well
as possible. Among the efforts, the capability of support
finer-granularity DMA isolation is a common requirement
due to the security consideration. With finer-granularity
DMA isolation, subsets of a PCI function can be isolated
from each others by the IOMMU. As a result, there is a
request in software to attach multiple domains to a physical
PCI device. One example of such use model is the Intel
Scalable IOV [1] [2]. The Intel vt-d 3.0 spec [3] introduces
the scalable mode which enables PASID granularity DMA
isolation.
This adds the APIs to support multiple domains per device.
In order to ease the discussions, we call it 'a domain in
auxiliary mode' or simply 'auxiliary domain' when multiple
domains are attached to a physical device.
The APIs include:
* iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_AUX)
- Detect both IOMMU and PCI endpoint devices supporting
the feature (aux-domain here) without the host driver
dependency.
* iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX)
- Check the enabling status of the feature (aux-domain
here). The aux-domain interfaces are available only
if this returns true.
* iommu_dev_enable/disable_feature(dev, IOMMU_DEV_FEAT_AUX)
- Enable/disable device specific aux-domain feature.
* iommu_aux_attach_device(domain, dev)
- Attaches @domain to @dev in the auxiliary mode. Multiple
domains could be attached to a single device in the
auxiliary mode with each domain representing an isolated
address space for an assignable subset of the device.
* iommu_aux_detach_device(domain, dev)
- Detach @domain which has been attached to @dev in the
auxiliary mode.
* iommu_aux_get_pasid(domain, dev)
- Return ID used for finer-granularity DMA translation.
For the Intel Scalable IOV usage model, this will be
a PASID. The device which supports Scalable IOV needs
to write this ID to the device register so that DMA
requests could be tagged with a right PASID prefix.
This has been updated with the latest proposal from Joerg,
posted here [5]. (A usage sketch of these APIs follows the
sign-off trailers below.)
Many people were involved in discussions of this design:
Kevin Tian <kevin.tian@intel.com>
Liu Yi L <yi.l.liu@intel.com>
Ashok Raj <ashok.raj@intel.com>
Sanjay Kumar <sanjay.k.kumar@intel.com>
Jacob Pan <jacob.jun.pan@linux.intel.com>
Alex Williamson <alex.williamson@redhat.com>
Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Joerg Roedel <joro@8bytes.org>
and some discussions can be found here [4] [5].
[1] https://software.intel.com/en-us/download/intel-scalable-io-virtualization-technical-specification
[2] https://schd.ws/hosted_files/lc32018/00/LC3-SIOV-final.pdf
[3] https://software.intel.com/en-us/download/intel-virtualization-technology-for-directed-io-architecture-specification
[4] https://lkml.org/lkml/2018/7/26/4
[5] https://www.spinics.net/lists/iommu/msg31874.html
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Liu Yi L <yi.l.liu@intel.com>
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Suggested-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Suggested-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
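A minimal sketch of the flow the message above describes, using only the calls it names. The domain allocation and the device-register programming are placeholders, and these aux-domain interfaces come from that proposal; they may not exist in other kernel versions.

	/* Sketch only: enable aux mode, attach an isolated address space,
	 * fetch its PASID, then tear everything down again. */
	static int example_aux_domain_assign(struct iommu_domain *domain,
					     struct device *dev)
	{
		int pasid, ret;

		if (!iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_AUX))
			return -ENODEV;

		ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_AUX);
		if (ret)
			return ret;

		/* Attach an isolated address space for one assignable subset */
		ret = iommu_aux_attach_device(domain, dev);
		if (ret)
			goto out_disable;

		pasid = iommu_aux_get_pasid(domain, dev);
		/* ... program @pasid into the device so its DMA is tagged with it ... */

		iommu_aux_detach_device(domain, dev);
	out_disable:
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_AUX);
		return ret;
	}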
2019-03-25 09:30:28 +08:00
2023-05-11 01:42:12 -03:00
/**
 * iommu_setup_default_domain - Set the default_domain for the group
 * @group: Group to change
 * @target_type: Domain type to set as the default_domain
2020-11-24 21:06:02 +08:00
 *
2023-05-11 01:42:12 -03:00
 * Allocate a default domain and set it as the current domain on the group. If
 * the group already has a default domain it will be changed to the target_type.
 * When target_type is 0 the default domain is selected based on driver and
 * system preferences.
2020-11-24 21:06:02 +08:00
 */
2023-05-11 01:42:12 -03:00
static int iommu_setup_default_domain(struct iommu_group *group,
				      int target_type)
2020-11-24 21:06:02 +08:00
{
2023-05-11 01:42:12 -03:00
	struct iommu_domain *old_dom = group->default_domain;
	struct group_device *gdev;
	struct iommu_domain *dom;
2023-05-11 01:42:13 -03:00
	bool direct_failed;
2023-05-11 01:42:12 -03:00
	int req_type;
2023-03-22 14:49:56 +08:00
	int ret;
2020-11-24 21:06:02 +08:00
2023-03-22 14:49:54 +08:00
	lockdep_assert_held(&group->mutex);
2020-11-24 21:06:02 +08:00
2023-05-11 01:42:12 -03:00
	req_type = iommu_get_default_domain_type(group, target_type);
	if (req_type < 0)
2023-03-22 14:49:56 +08:00
		return -EINVAL;
2020-11-24 21:06:02 +08:00
2023-05-11 01:42:12 -03:00
	dom = iommu_group_alloc_default_domain(group, req_type);
2023-11-01 20:28:11 -03:00
	if (IS_ERR(dom))
		return PTR_ERR(dom);
2020-11-24 21:06:02 +08:00
2023-05-11 01:42:12 -03:00
	if (group->default_domain == dom)
		return 0;
2020-11-24 21:06:02 +08:00
2024-10-28 09:37:59 +00:00
	if (iommu_is_dma_domain(dom)) {
		ret = iommu_get_dma_cookie(dom);
		if (ret) {
			iommu_domain_free(dom);
			return ret;
		}
	}
2023-05-11 01:42:12 -03:00
	/*
	 * IOMMU_RESV_DIRECT and IOMMU_RESV_DIRECT_RELAXABLE regions must be
	 * mapped before their device is attached, in order to guarantee
	 * continuity with any FW activity
	 */
2023-05-11 01:42:13 -03:00
	direct_failed = false;
	for_each_group_device(group, gdev) {
		if (iommu_create_device_direct_mappings(dom, gdev->dev)) {
			direct_failed = true;
			dev_warn_once(
				gdev->dev->iommu->iommu_dev->dev,
				"IOMMU driver was not able to establish FW requested direct mapping.");
		}
	}
2023-03-22 14:49:54 +08:00
2023-05-11 01:42:12 -03:00
	/* We must set default_domain early for __iommu_device_set_domain */
	group->default_domain = dom;
	if (!group->domain) {
		/*
		 * Drivers are not allowed to fail the first domain attach.
		 * The only way to recover from this is to fail attaching the
		 * iommu driver and call ops->release_device. Put the domain
		 * in group->default_domain so it is freed after.
		 */
		ret = __iommu_group_set_domain_internal(
			group, dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
		if (WARN_ON(ret))
2023-06-26 12:13:11 -03:00
			goto out_free_old;
2023-05-11 01:42:12 -03:00
	} else {
		ret = __iommu_group_set_domain(group, dom);
2023-06-26 12:13:11 -03:00
		if (ret)
			goto err_restore_def_domain;
2023-05-11 01:42:12 -03:00
	}
2020-11-24 21:06:02 +08:00
2023-05-11 01:42:13 -03:00
	/*
	 * Drivers are supposed to allow mappings to be installed in a domain
	 * before device attachment, but some don't. Hack around this defect by
	 * trying again after attaching. If this happens it means the device
	 * will not continuously have the IOMMU_RESV_DIRECT map.
	 */
	if (direct_failed) {
		for_each_group_device(group, gdev) {
			ret = iommu_create_device_direct_mappings(dom, gdev->dev);
			if (ret)
2023-06-26 12:13:11 -03:00
				goto err_restore_domain;
2023-05-11 01:42:13 -03:00
		}
	}
2020-11-24 21:06:02 +08:00
2023-06-26 12:13:11 -03:00
out_free_old:
	if (old_dom)
		iommu_domain_free(old_dom);
	return ret;
err_restore_domain:
	if (old_dom)
2023-05-11 01:42:13 -03:00
		__iommu_group_set_domain_internal(
			group, old_dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
2023-06-26 12:13:11 -03:00
err_restore_def_domain:
	if (old_dom) {
2023-05-11 01:42:13 -03:00
		iommu_domain_free(dom);
2023-06-26 12:13:11 -03:00
		group->default_domain = old_dom;
2023-05-11 01:42:13 -03:00
	}
2020-11-24 21:06:02 +08:00
	return ret;
}
/*
2021-08-11 13:21:38 +01:00
 * Changing the default domain through sysfs requires the users to unbind the
 * drivers from the devices in the iommu group, except for a DMA -> DMA-FQ
 * transition. Return failure if this isn't met.
2020-11-24 21:06:02 +08:00
 *
 * We need to consider the race between this and the device release path.
2023-03-22 14:49:55 +08:00
 * group->mutex is used here to guarantee that the device release path
2020-11-24 21:06:02 +08:00
 * will not be entered at the same time.
 */
static ssize_t iommu_group_store_type(struct iommu_group *group,
				      const char *buf, size_t count)
{
2023-05-11 01:42:15 -03:00
	struct group_device *gdev;
2020-11-24 21:06:02 +08:00
	int ret, req_type;
	if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO))
		return -EACCES;
2022-05-04 13:39:58 +01:00
	if (WARN_ON(!group) || !group->default_domain)
2020-11-24 21:06:02 +08:00
		return -EINVAL;
	if (sysfs_streq(buf, "identity"))
		req_type = IOMMU_DOMAIN_IDENTITY;
	else if (sysfs_streq(buf, "DMA"))
		req_type = IOMMU_DOMAIN_DMA;
2021-08-11 13:21:35 +01:00
	else if (sysfs_streq(buf, "DMA-FQ"))
		req_type = IOMMU_DOMAIN_DMA_FQ;
2020-11-24 21:06:02 +08:00
	else if (sysfs_streq(buf, "auto"))
		req_type = 0;
	else
		return -EINVAL;
	mutex_lock(&group->mutex);
2023-03-22 14:49:55 +08:00
	/* We can bring up a flush queue without tearing down the domain. */
	if (req_type == IOMMU_DOMAIN_DMA_FQ &&
	    group->default_domain->type == IOMMU_DOMAIN_DMA) {
		ret = iommu_dma_init_fq(group->default_domain);
2023-05-11 01:42:15 -03:00
		if (ret)
			goto out_unlock;
2023-03-22 14:49:55 +08:00
2023-05-11 01:42:15 -03:00
		group->default_domain->type = IOMMU_DOMAIN_DMA_FQ;
		ret = count;
		goto out_unlock;
2023-03-22 14:49:55 +08:00
	}
	/* Otherwise, ensure that device exists and no driver is bound. */
	if (list_empty(&group->devices) || group->owner_cnt) {
2023-05-11 01:42:15 -03:00
		ret = -EPERM;
		goto out_unlock;
2020-11-24 21:06:02 +08:00
	}
2023-05-11 01:42:12 -03:00
	ret = iommu_setup_default_domain(group, req_type);
2023-05-11 01:42:15 -03:00
	if (ret)
		goto out_unlock;
2020-11-24 21:06:02 +08:00
2023-03-22 14:49:54 +08:00
	/* Make sure dma_ops is appropriately set */
2023-05-11 01:42:15 -03:00
	for_each_group_device(group, gdev)
2024-04-19 17:54:45 +01:00
		iommu_setup_dma_ops(gdev->dev);
2020-11-24 21:06:02 +08:00
2023-05-11 01:42:15 -03:00
out_unlock:
	mutex_unlock(&group->mutex);
2023-03-22 14:49:55 +08:00
	return ret ?: count;
2020-11-24 21:06:02 +08:00
}
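A minimal user-space sketch of driving the sysfs store above, assuming the group-type attribute is exposed at /sys/kernel/iommu_groups/<N>/type (the group number used below is a placeholder). The accepted strings mirror the parser in iommu_group_store_type(): "identity", "DMA", "DMA-FQ" and "auto".

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	/* Write one of the accepted type strings to a group's sysfs node */
	static int set_group_type(const char *path, const char *type)
	{
		int fd = open(path, O_WRONLY);
		ssize_t n;

		if (fd < 0) {
			perror("open");
			return -1;
		}
		n = write(fd, type, strlen(type));	/* e.g. "DMA-FQ" */
		if (n < 0)
			perror("write");	/* EACCES/EPERM/EINVAL per the checks above */
		close(fd);
		return n < 0 ? -1 : 0;
	}

	int main(void)
	{
		/* Group 0 is only an example; pick the group of interest */
		return set_group_type("/sys/kernel/iommu_groups/0/type", "DMA-FQ");
	}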
2022-04-18 08:49:50 +08:00
/**
 * iommu_device_use_default_domain() - Device driver wants to handle device
 *                                     DMA through the kernel DMA API.
 * @dev: The device.
 *
 * The device driver about to bind @dev wants to do DMA through the kernel
 * DMA API. Return 0 if it is allowed, otherwise an error.
 */
int iommu_device_use_default_domain(struct device *dev)
{
2023-08-22 13:15:56 -03:00
	/* Caller is the driver core during the pre-probe path */
	struct iommu_group *group = dev->iommu_group;
2022-04-18 08:49:50 +08:00
	int ret = 0;
	if (!group)
		return 0;
	mutex_lock(&group->mutex);
2025-02-28 15:46:30 +00:00
	/* We may race against bus_iommu_probe() finalising groups here */
	if (!group->default_domain) {
		ret = -EPROBE_DEFER;
		goto unlock_out;
	}
2022-04-18 08:49:50 +08:00
	if (group->owner_cnt) {
2023-10-06 09:57:06 +00:00
		if (group->domain != group->default_domain || group->owner ||
2022-10-31 08:59:09 +08:00
		    !xa_empty(&group->pasid_array)) {
2022-04-18 08:49:50 +08:00
			ret = -EBUSY;
			goto unlock_out;
		}
	}
	group->owner_cnt++;
unlock_out:
	mutex_unlock(&group->mutex);
	return ret;
}
/**
 * iommu_device_unuse_default_domain() - Device driver stops handling device
 *                                       DMA through the kernel DMA API.
 * @dev: The device.
 *
 * The device driver doesn't want to do DMA through kernel DMA API anymore.
 * It must be called after iommu_device_use_default_domain().
 */
void iommu_device_unuse_default_domain(struct device *dev)
{
2023-08-22 13:15:56 -03:00
	/* Caller is the driver core during the post-probe path */
	struct iommu_group *group = dev->iommu_group;
2022-04-18 08:49:50 +08:00
	if (!group)
		return;
	mutex_lock(&group->mutex);
2022-10-31 08:59:09 +08:00
	if (!WARN_ON(!group->owner_cnt || !xa_empty(&group->pasid_array)))
2022-04-18 08:49:50 +08:00
		group->owner_cnt--;
	mutex_unlock(&group->mutex);
}
2022-05-09 13:19:19 -03:00
static int __iommu_group_alloc_blocking_domain(struct iommu_group *group)
{
2024-10-28 09:38:09 +00:00
	struct device *dev = iommu_group_first_dev(group);
	const struct iommu_ops *ops = dev_iommu_ops(dev);
2023-11-01 20:28:11 -03:00
	struct iommu_domain *domain;
2022-05-09 13:19:19 -03:00
	if (group->blocking_domain)
		return 0;
2024-10-28 09:38:09 +00:00
	if (ops->blocked_domain) {
		group->blocking_domain = ops->blocked_domain;
		return 0;
2022-05-09 13:19:19 -03:00
	}
2024-10-28 09:38:09 +00:00
	/*
	 * For drivers that do not yet understand IOMMU_DOMAIN_BLOCKED create an
	 * empty PAGING domain instead.
	 */
	domain = iommu_paging_domain_alloc(dev);
	if (IS_ERR(domain))
		return PTR_ERR(domain);
2023-11-01 20:28:11 -03:00
	group->blocking_domain = domain;
2022-05-09 13:19:19 -03:00
	return 0;
}
2022-11-29 16:29:25 -04:00
static int __iommu_take_dma_ownership(struct iommu_group *group, void *owner)
{
	int ret;
	if ((group->domain && group->domain != group->default_domain) ||
	    !xa_empty(&group->pasid_array))
		return -EBUSY;
	ret = __iommu_group_alloc_blocking_domain(group);
	if (ret)
		return ret;
	ret = __iommu_group_set_domain(group, group->blocking_domain);
	if (ret)
		return ret;
	group->owner = owner;
	group->owner_cnt++;
	return 0;
}
2022-04-18 08:49:50 +08:00
/**
 * iommu_group_claim_dma_owner() - Set DMA ownership of a group
 * @group: The group.
 * @owner: Caller specified pointer. Used for exclusive ownership.
 *
2022-11-29 16:29:25 -04:00
 * This is to support backward compatibility for vfio which manages the dma
 * ownership in iommu_group level. New invocations on this interface should be
 * prohibited. Only a single owner may exist for a group.
2022-04-18 08:49:50 +08:00
 */
int iommu_group_claim_dma_owner(struct iommu_group *group, void *owner)
{
	int ret = 0;
2022-11-29 16:29:25 -04:00
	if (WARN_ON(!owner))
		return -EINVAL;
2022-04-18 08:49:50 +08:00
	mutex_lock(&group->mutex);
	if (group->owner_cnt) {
		ret = -EPERM;
		goto unlock_out;
	}
2022-11-29 16:29:25 -04:00
	ret = __iommu_take_dma_ownership(group, owner);
2022-04-18 08:49:50 +08:00
unlock_out:
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_group_claim_dma_owner);
/**
2022-11-29 16:29:25 -04:00
 * iommu_device_claim_dma_owner() - Set DMA ownership of a device
 * @dev: The device.
 * @owner: Caller specified pointer. Used for exclusive ownership.
2022-04-18 08:49:50 +08:00
 *
2022-11-29 16:29:25 -04:00
 * Claim the DMA ownership of a device. Multiple devices in the same group may
 * concurrently claim ownership if they present the same owner value. Returns 0
 * on success and error code on failure
2022-04-18 08:49:50 +08:00
 */
2022-11-29 16:29:25 -04:00
int iommu_device_claim_dma_owner(struct device *dev, void *owner)
2022-04-18 08:49:50 +08:00
{
2023-08-22 13:15:56 -03:00
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;
2022-11-29 16:29:25 -04:00
	int ret = 0;
	if (WARN_ON(!owner))
		return -EINVAL;
2022-05-09 13:19:19 -03:00
2022-12-30 12:31:00 +04:00
	if (!group)
		return -ENODEV;
2022-04-18 08:49:50 +08:00
	mutex_lock(&group->mutex);
2022-11-29 16:29:25 -04:00
	if (group->owner_cnt) {
		if (group->owner != owner) {
			ret = -EPERM;
			goto unlock_out;
		}
		group->owner_cnt++;
		goto unlock_out;
	}
	ret = __iommu_take_dma_ownership(group, owner);
unlock_out:
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_device_claim_dma_owner);
static void __iommu_release_dma_ownership(struct iommu_group *group)
{
2022-10-31 08:59:09 +08:00
	if (WARN_ON(!group->owner_cnt || !group->owner ||
		    !xa_empty(&group->pasid_array)))
2022-11-29 16:29:25 -04:00
		return;
2022-04-18 08:49:50 +08:00
	group->owner_cnt = 0;
	group->owner = NULL;
2023-05-11 01:42:01 -03:00
	__iommu_group_set_domain_nofail(group, group->default_domain);
2022-11-29 16:29:25 -04:00
}
2022-05-09 13:19:19 -03:00
2022-11-29 16:29:25 -04:00
/**
 * iommu_group_release_dma_owner() - Release DMA ownership of a group
2023-07-31 19:27:58 +08:00
 * @group: The group
2022-11-29 16:29:25 -04:00
 *
 * Release the DMA ownership claimed by iommu_group_claim_dma_owner().
 */
void iommu_group_release_dma_owner(struct iommu_group *group)
{
	mutex_lock(&group->mutex);
	__iommu_release_dma_ownership(group);
2022-04-18 08:49:50 +08:00
	mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_group_release_dma_owner);
2022-11-29 16:29:25 -04:00
/**
 * iommu_device_release_dma_owner() - Release DMA ownership of a device
2023-07-31 19:27:58 +08:00
 * @dev: The device.
2022-11-29 16:29:25 -04:00
 *
 * Release the DMA ownership claimed by iommu_device_claim_dma_owner().
 */
void iommu_device_release_dma_owner(struct device *dev)
{
2023-08-22 13:15:56 -03:00
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;
2022-11-29 16:29:25 -04:00
	mutex_lock(&group->mutex);
	if (group->owner_cnt > 1)
		group->owner_cnt--;
	else
		__iommu_release_dma_ownership(group);
	mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_device_release_dma_owner);
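A minimal sketch of how a driver that wants exclusive, user-controlled DMA for its device might pair the claim/release helpers above. The driver name and context structure ("foo") are hypothetical; the owner token is simply a unique pointer, here the driver's private structure.

	struct foo_ctx {
		struct device *dev;
	};

	static int foo_open(struct foo_ctx *ctx)
	{
		/* Fails with -EPERM/-EBUSY if another owner already claimed the group */
		return iommu_device_claim_dma_owner(ctx->dev, ctx);
	}

	static void foo_close(struct foo_ctx *ctx)
	{
		/* Must pair with the successful claim above */
		iommu_device_release_dma_owner(ctx->dev);
	}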
2022-04-18 08:49:50 +08:00
/**
 * iommu_group_dma_owner_claimed() - Query group dma ownership status
 * @group: The group.
 *
 * This provides status query on a given group. It is racy and only for
 * non-binding status reporting.
 */
bool iommu_group_dma_owner_claimed(struct iommu_group *group)
{
	unsigned int user;
	mutex_lock(&group->mutex);
	user = group->owner_cnt;
	mutex_unlock(&group->mutex);
	return user;
}
EXPORT_SYMBOL_GPL(iommu_group_dma_owner_claimed);
2022-10-31 08:59:09 +08:00
2024-12-04 04:29:23 -08:00
static void iommu_remove_dev_pasid(struct device *dev, ioasid_t pasid,
				   struct iommu_domain *domain)
{
	const struct iommu_ops *ops = dev_iommu_ops(dev);
2024-12-04 04:29:24 -08:00
	struct iommu_domain *blocked_domain = ops->blocked_domain;
2024-12-04 04:29:23 -08:00
2024-12-04 04:29:28 -08:00
	WARN_ON(blocked_domain->ops->set_dev_pasid(blocked_domain,
						   dev, pasid, domain));
2024-12-04 04:29:23 -08:00
}
2022-10-31 08:59:09 +08:00
static int __iommu_set_group_pasid(struct iommu_domain *domain,
2025-03-21 10:19:24 -07:00
				   struct iommu_group *group, ioasid_t pasid,
				   struct iommu_domain *old)
2022-10-31 08:59:09 +08:00
{
2024-03-28 05:29:57 -07:00
	struct group_device *device, *last_gdev;
	int ret;
2022-10-31 08:59:09 +08:00
2023-05-11 01:42:00 -03:00
	for_each_group_device(group, device) {
2025-05-19 18:19:37 -07:00
		if (device->dev->iommu->max_pasids > 0) {
			ret = domain->ops->set_dev_pasid(domain, device->dev,
							 pasid, old);
			if (ret)
				goto err_revert;
		}
2022-10-31 08:59:09 +08:00
	}
2024-03-28 05:29:57 -07:00
	return 0;
err_revert:
	last_gdev = device;
	for_each_group_device(group, device) {
		if (device == last_gdev)
			break;
2025-05-19 18:19:37 -07:00
		if (device->dev->iommu->max_pasids > 0) {
			/*
			 * If there is no old domain, undo the PASID on the
			 * devices that already succeeded. Otherwise, roll
			 * those devices back to the old domain; it is a
			 * driver bug to fail attaching with a previously
			 * good domain.
			 */
			if (!old ||
			    WARN_ON(old->ops->set_dev_pasid(old, device->dev,
2025-03-21 10:19:24 -07:00
							    pasid, domain)))
2025-05-19 18:19:37 -07:00
				iommu_remove_dev_pasid(device->dev, pasid, domain);
		}
2024-03-28 05:29:57 -07:00
	}
2022-10-31 08:59:09 +08:00
	return ret;
}
static void __iommu_remove_group_pasid(struct iommu_group *group,
2024-03-28 05:29:58 -07:00
				       ioasid_t pasid,
				       struct iommu_domain *domain)
2022-10-31 08:59:09 +08:00
{
	struct group_device *device;
2025-05-19 18:19:37 -07:00
	for_each_group_device(group, device) {
		if (device->dev->iommu->max_pasids > 0)
			iommu_remove_dev_pasid(device->dev, pasid, domain);
	}
2022-10-31 08:59:09 +08:00
}
/*
 * iommu_attach_device_pasid() - Attach a domain to pasid of device
 * @domain: the iommu domain.
 * @dev: the attached device.
 * @pasid: the pasid of the device.
2024-07-02 14:34:35 +08:00
 * @handle: the attach handle.
2022-10-31 08:59:09 +08:00
 *
2025-03-21 10:19:23 -07:00
 * Callers that intend to pass a valid handle should always provide a fresh
 * one, to avoid racing with paths that hold a lockless reference to the
 * handle.
 *
2022-10-31 08:59:09 +08:00
 * Return: 0 on success, or an error.
 */
int iommu_attach_device_pasid(struct iommu_domain *domain,
2024-07-02 14:34:35 +08:00
			      struct device *dev, ioasid_t pasid,
			      struct iommu_attach_handle *handle)
2022-10-31 08:59:09 +08:00
{
2023-08-22 13:15:56 -03:00
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;
2024-03-27 10:41:39 -03:00
	struct group_device *device;
2024-12-04 04:29:22 -08:00
	const struct iommu_ops *ops;
2025-02-25 17:18:48 -08:00
	void *entry;
2022-10-31 08:59:09 +08:00
	int ret;
	if (!group)
		return -ENODEV;
2024-12-04 04:29:22 -08:00
	ops = dev_iommu_ops(dev);
	if (!domain->ops->set_dev_pasid ||
2024-12-04 04:29:28 -08:00
	    !ops->blocked_domain ||
	    !ops->blocked_domain->ops->set_dev_pasid)
2024-12-04 04:29:22 -08:00
		return -EOPNOTSUPP;
2025-04-24 11:41:23 +08:00
	if (!domain_iommu_ops_compatible(ops, domain) ||
	    pasid == IOMMU_NO_PASID)
2023-11-21 18:03:59 +00:00
		return -EINVAL;
2022-10-31 08:59:09 +08:00
	mutex_lock(&group->mutex);
2024-03-27 10:41:39 -03:00
	for_each_group_device(group, device) {
2025-05-19 18:19:37 -07:00
		/*
		 * Skip PASID validation for devices without PASID support
		 * (max_pasids = 0). These devices cannot issue transactions
		 * with PASID, so they don't affect the group's PASID usage.
		 */
		if ((device->dev->iommu->max_pasids > 0) &&
		    (pasid >= device->dev->iommu->max_pasids)) {
2024-03-27 10:41:39 -03:00
			ret = -EINVAL;
			goto out_unlock;
		}
	}
2025-02-25 17:18:48 -08:00
	entry = iommu_make_pasid_array_entry(domain, handle);
2024-07-02 14:34:35 +08:00
2025-02-25 17:18:49 -08:00
	/*
	 * An already-present entry is a failure case, so use xa_insert()
	 * instead of xa_reserve().
	 */
	ret = xa_insert(&group->pasid_array, pasid, XA_ZERO_ENTRY, GFP_KERNEL);
2024-07-02 14:34:35 +08:00
	if (ret)
2022-10-31 08:59:09 +08:00
		goto out_unlock;
2025-03-21 10:19:24 -07:00
	ret = __iommu_set_group_pasid(domain, group, pasid, NULL);
2025-02-25 17:18:49 -08:00
	if (ret) {
		xa_release(&group->pasid_array, pasid);
		goto out_unlock;
	}
	/*
	 * The xa_insert() above reserved the memory, and the group->mutex is
	 * held, so this cannot fail. The new domain cannot be visible until
	 * the operation succeeds as we cannot tolerate PRIs becoming
	 * concurrently queued and then failing attach.
	 */
	WARN_ON(xa_is_err(xa_store(&group->pasid_array,
				   pasid, entry, GFP_KERNEL)));
2022-10-31 08:59:09 +08:00
out_unlock:
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_attach_device_pasid);
2025-03-21 10:19:24 -07:00
/**
 * iommu_replace_device_pasid - Replace the domain that a specific pasid
 *                              of the device is attached to
 * @domain: the new iommu domain
 * @dev: the attached device.
 * @pasid: the pasid of the device.
 * @handle: the attach handle.
 *
 * This API allows the pasid to switch domains. The @pasid should have been
 * attached; otherwise, this fails. The pasid will keep the old configuration
 * if replacement failed.
 *
 * Callers that intend to pass a valid handle should always provide a fresh
 * one, to avoid racing with paths that hold a lockless reference to the
 * handle.
 *
 * Return 0 on success, or an error.
 */
int iommu_replace_device_pasid(struct iommu_domain *domain,
			       struct device *dev, ioasid_t pasid,
			       struct iommu_attach_handle *handle)
{
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;
	struct iommu_attach_handle *entry;
	struct iommu_domain *curr_domain;
	void *curr;
	int ret;
	if (!group)
		return -ENODEV;
	if (!domain->ops->set_dev_pasid)
		return -EOPNOTSUPP;
2025-04-24 11:41:23 +08:00
	if (!domain_iommu_ops_compatible(dev_iommu_ops(dev), domain) ||
2025-03-21 10:19:24 -07:00
	    pasid == IOMMU_NO_PASID || !handle)
		return -EINVAL;
	mutex_lock(&group->mutex);
	entry = iommu_make_pasid_array_entry(domain, handle);
	curr = xa_cmpxchg(&group->pasid_array, pasid, NULL,
			  XA_ZERO_ENTRY, GFP_KERNEL);
	if (xa_is_err(curr)) {
		ret = xa_err(curr);
		goto out_unlock;
	}
	/*
	 * No domain (with or without handle) attached, hence not
	 * a replace case.
	 */
	if (!curr) {
		xa_release(&group->pasid_array, pasid);
		ret = -EINVAL;
		goto out_unlock;
	}
	/*
	 * Reusing a handle is problematic because there are paths that
	 * reference the handle without locking. To avoid racing with them,
	 * reject callers that attempt it.
	 */
	if (curr == entry) {
		WARN_ON(1);
		ret = -EINVAL;
		goto out_unlock;
	}
	curr_domain = pasid_array_entry_to_domain(curr);
	ret = 0;
	if (curr_domain != domain) {
		ret = __iommu_set_group_pasid(domain, group,
					      pasid, curr_domain);
		if (ret)
			goto out_unlock;
	}
	/*
	 * The above xa_cmpxchg() reserved the memory, and the
	 * group->mutex is held, so this cannot fail.
	 */
	WARN_ON(xa_is_err(xa_store(&group->pasid_array,
				   pasid, entry, GFP_KERNEL)));
out_unlock:
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_NS_GPL(iommu_replace_device_pasid, "IOMMUFD_INTERNAL");
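A minimal sketch of the attach -> replace -> detach flow using the PASID APIs above, assuming the caller is a probed driver on @dev, already owns two domains compatible with the device, and has imported the "IOMMUFD_INTERNAL" namespace for the replace helper. The domains, handles and PASID value are placeholders.

	static int example_pasid_switch(struct device *dev, ioasid_t pasid,
					struct iommu_domain *dom_a,
					struct iommu_domain *dom_b,
					struct iommu_attach_handle *h1,
					struct iommu_attach_handle *h2)
	{
		int ret;

		ret = iommu_attach_device_pasid(dom_a, dev, pasid, h1);
		if (ret)
			return ret;

		/* Switch the PASID to dom_b; note the fresh handle h2 */
		ret = iommu_replace_device_pasid(dom_b, dev, pasid, h2);
		if (ret) {
			/* On failure the PASID stays attached to dom_a */
			iommu_detach_device_pasid(dom_a, dev, pasid);
			return ret;
		}

		iommu_detach_device_pasid(dom_b, dev, pasid);
		return 0;
	}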
2022-10-31 08:59:09 +08:00
/*
 * iommu_detach_device_pasid() - Detach the domain from pasid of device
 * @domain: the iommu domain.
 * @dev: the attached device.
 * @pasid: the pasid of the device.
 *
 * The @domain must have been attached to @pasid of the @dev with
 * iommu_attach_device_pasid().
 */
void iommu_detach_device_pasid(struct iommu_domain *domain, struct device *dev,
			       ioasid_t pasid)
{
2023-08-22 13:15:56 -03:00
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;
2022-10-31 08:59:09 +08:00
	mutex_lock(&group->mutex);
2024-03-28 05:29:58 -07:00
	__iommu_remove_group_pasid(group, pasid, domain);
2024-07-02 14:34:35 +08:00
	xa_erase(&group->pasid_array, pasid);
2022-10-31 08:59:09 +08:00
	mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_detach_device_pasid);
2023-08-09 20:47:55 +08:00
ioasid_t iommu_alloc_global_pasid(struct device *dev)
{
	int ret;
	/* max_pasids == 0 means that the device does not support PASID */
	if (!dev->iommu->max_pasids)
		return IOMMU_PASID_INVALID;
	/*
	 * max_pasids is set up by vendor driver based on number of PASID bits
	 * supported but the IDA allocation is inclusive.
	 */
	ret = ida_alloc_range(&iommu_global_pasid_ida, IOMMU_FIRST_GLOBAL_PASID,
			      dev->iommu->max_pasids - 1, GFP_KERNEL);
	return ret < 0 ? IOMMU_PASID_INVALID : ret;
}
EXPORT_SYMBOL_GPL(iommu_alloc_global_pasid);
void iommu_free_global_pasid(ioasid_t pasid)
{
	if (WARN_ON(pasid == IOMMU_PASID_INVALID))
		return;
	ida_free(&iommu_global_pasid_ida, pasid);
}
EXPORT_SYMBOL_GPL(iommu_free_global_pasid);
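A minimal sketch of the global-PASID allocation pairing above, for a driver that needs a PASID unique across the system. Only the helpers defined above are used; "dev" is whatever device the driver was probed on, and the error code chosen here is illustrative.

	static int example_use_global_pasid(struct device *dev)
	{
		ioasid_t pasid = iommu_alloc_global_pasid(dev);

		/* Covers both "no PASID support" and "IDA exhausted" */
		if (pasid == IOMMU_PASID_INVALID)
			return -ENOSPC;

		/* ... attach a domain with this pasid / program the device ... */

		iommu_free_global_pasid(pasid);
		return 0;
	}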
2024-07-02 14:34:36 +08:00
/**
 * iommu_attach_handle_get - Return the attach handle
 * @group: the iommu group that domain was attached to
 * @pasid: the pasid within the group
 * @type: matched domain type, 0 for any match
 *
 * Return handle or ERR_PTR(-ENOENT) on none, ERR_PTR(-EBUSY) on mismatch.
 *
 * Return the attach handle to the caller. The life cycle of an iommu attach
 * handle is from the time when the domain is attached to the time when the
 * domain is detached. Callers are required to synchronize the call of
 * iommu_attach_handle_get() with domain attachment and detachment. The attach
 * handle can only be used during its life cycle.
 */
struct iommu_attach_handle *
iommu_attach_handle_get(struct iommu_group *group, ioasid_t pasid, unsigned int type)
{
	struct iommu_attach_handle *handle;
2025-02-25 17:18:48 -08:00
	void *entry;
2024-07-02 14:34:36 +08:00
	xa_lock(&group->pasid_array);
2025-02-25 17:18:48 -08:00
	entry = xa_load(&group->pasid_array, pasid);
	if (!entry || xa_pointer_tag(entry) != IOMMU_PASID_ARRAY_HANDLE) {
2024-07-02 14:34:36 +08:00
		handle = ERR_PTR(-ENOENT);
2025-02-25 17:18:48 -08:00
	} else {
		handle = xa_untag_pointer(entry);
		if (type && handle->domain->type != type)
			handle = ERR_PTR(-EBUSY);
	}
2024-07-02 14:34:36 +08:00
	xa_unlock(&group->pasid_array);
	return handle;
}
module: Convert symbol namespace to string literal
Clean up the existing export namespace code along the same lines of
commit 33def8498fdd ("treewide: Convert macro and uses of __section(foo)
to __section("foo")") and for the same reason, it is not desired for the
namespace argument to be a macro expansion itself.
Scripted using
git grep -l -e MODULE_IMPORT_NS -e EXPORT_SYMBOL_NS | while read file;
do
awk -i inplace '
/^#define EXPORT_SYMBOL_NS/ {
gsub(/__stringify\(ns\)/, "ns");
print;
next;
}
/^#define MODULE_IMPORT_NS/ {
gsub(/__stringify\(ns\)/, "ns");
print;
next;
}
/MODULE_IMPORT_NS/ {
$0 = gensub(/MODULE_IMPORT_NS\(([^)]*)\)/, "MODULE_IMPORT_NS(\"\\1\")", "g");
}
/EXPORT_SYMBOL_NS/ {
if ($0 ~ /(EXPORT_SYMBOL_NS[^(]*)\(([^,]+),/) {
if ($0 !~ /(EXPORT_SYMBOL_NS[^(]*)\(([^,]+), ([^)]+)\)/ &&
$0 !~ /(EXPORT_SYMBOL_NS[^(]*)\(\)/ &&
$0 !~ /^my/) {
getline line;
gsub(/[[:space:]]*\\$/, "");
gsub(/[[:space:]]/, "", line);
$0 = $0 " " line;
}
$0 = gensub(/(EXPORT_SYMBOL_NS[^(]*)\(([^,]+), ([^)]+)\)/,
"\\1(\\2, \"\\3\")", "g");
}
}
{ print }' $file;
done
Requested-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://mail.google.com/mail/u/2/#inbox/FMfcgzQXKWgMmjdFwwdsfgxzKpVHWPlc
Acked-by: Greg KH <gregkh@linuxfoundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-02 15:59:47 +01:00
EXPORT_SYMBOL_NS_GPL(iommu_attach_handle_get, "IOMMUFD_INTERNAL");
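A minimal sketch of the string-literal namespace convention described in the commit message above, showing both sides of the contract. "my_helper" and the "MY_NS" namespace are hypothetical; only the macro forms are taken from the message.

	#include <linux/export.h>
	#include <linux/module.h>

	int my_helper(void)
	{
		return 0;
	}
	/* Exporting module: the namespace argument is now a plain string literal */
	EXPORT_SYMBOL_NS_GPL(my_helper, "MY_NS");

	/* Consuming module: must import the namespace before linking to the symbol */
	MODULE_IMPORT_NS("MY_NS");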
2024-07-02 14:34:38 +08:00
/**
 * iommu_attach_group_handle - Attach an IOMMU domain to an IOMMU group
 * @domain: IOMMU domain to attach
 * @group: IOMMU group that will be attached
 * @handle: attach handle
 *
 * Returns 0 on success and error code on failure.
 *
 * This is a variant of iommu_attach_group(). It allows the caller to provide
 * an attach handle and use it when the domain is attached. This is currently
 * used by IOMMUFD to deliver the I/O page faults.
2025-03-21 10:19:23 -07:00
 *
 * Callers should always provide a new handle, to avoid racing with paths
 * that hold a lockless reference to the handle.
2024-07-02 14:34:38 +08:00
 */
int iommu_attach_group_handle(struct iommu_domain *domain,
			      struct iommu_group *group,
			      struct iommu_attach_handle *handle)
{
2025-02-25 17:18:48 -08:00
	void *entry;
2024-07-02 14:34:38 +08:00
	int ret;
2025-02-25 17:18:46 -08:00
	if (!handle)
		return -EINVAL;
2024-07-02 14:34:38 +08:00
	mutex_lock(&group->mutex);
2025-02-25 17:18:48 -08:00
	entry = iommu_make_pasid_array_entry(domain, handle);
2025-02-25 17:18:49 -08:00
	ret = xa_insert(&group->pasid_array,
			IOMMU_NO_PASID, XA_ZERO_ENTRY, GFP_KERNEL);
2024-07-02 14:34:38 +08:00
	if (ret)
2025-02-25 17:18:49 -08:00
		goto out_unlock;
2024-07-02 14:34:38 +08:00
	ret = __iommu_attach_group(domain, group);
2025-02-25 17:18:49 -08:00
	if (ret) {
		xa_release(&group->pasid_array, IOMMU_NO_PASID);
		goto out_unlock;
	}
2024-07-02 14:34:38 +08:00
2025-02-25 17:18:49 -08:00
	/*
	 * The xa_insert() above reserved the memory, and the group->mutex is
	 * held, so this cannot fail. The new domain cannot be visible until
	 * the operation succeeds as we cannot tolerate PRIs becoming
	 * concurrently queued and then failing attach.
	 */
	WARN_ON(xa_is_err(xa_store(&group->pasid_array,
				   IOMMU_NO_PASID, entry, GFP_KERNEL)));
out_unlock:
2024-07-02 14:34:38 +08:00
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_NS_GPL(iommu_attach_group_handle, "IOMMUFD_INTERNAL");
2024-07-02 14:34:38 +08:00
/**
 * iommu_detach_group_handle - Detach an IOMMU domain from an IOMMU group
 * @domain: IOMMU domain to detach
 * @group: IOMMU group that the domain will be detached from
 *
 * Detach the specified IOMMU domain from the specified IOMMU group.
 * It must be used in conjunction with iommu_attach_group_handle().
 */
void iommu_detach_group_handle(struct iommu_domain *domain,
			       struct iommu_group *group)
{
	mutex_lock(&group->mutex);
	__iommu_group_set_core_domain(group);
	xa_erase(&group->pasid_array, IOMMU_NO_PASID);
	mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_NS_GPL(iommu_detach_group_handle, "IOMMUFD_INTERNAL");
2024-07-02 14:34:38 +08:00
/**
 * iommu_replace_group_handle - replace the domain that a group is attached to
 * @group: IOMMU group that will be attached to the new domain
 * @new_domain: new IOMMU domain to replace with
 * @handle: attach handle
 *
2025-02-25 17:18:47 -08:00
 * This API allows the group to switch domains without being forced to go to
 * the blocking domain in-between. It allows the caller to provide an attach
 * handle for the new domain and use it when the domain is attached.
 *
 * If the currently attached domain is a core domain (e.g. a default_domain),
 * this acts just like iommu_attach_group_handle().
2025-03-21 10:19:23 -07:00
 *
 * Callers should always provide a new handle, to avoid racing with paths
 * that hold a lockless reference to the handle.
2024-07-02 14:34:38 +08:00
 */
int iommu_replace_group_handle(struct iommu_group *group,
			       struct iommu_domain *new_domain,
			       struct iommu_attach_handle *handle)
{
2025-02-25 17:18:48 -08:00
	void *curr, *entry;
2024-07-02 14:34:38 +08:00
	int ret;
2025-02-25 17:18:46 -08:00
	if (!new_domain || !handle)
2024-07-02 14:34:38 +08:00
		return -EINVAL;
	mutex_lock(&group->mutex);
2025-02-25 17:18:48 -08:00
	entry = iommu_make_pasid_array_entry(new_domain, handle);
2025-02-25 17:18:46 -08:00
	ret = xa_reserve(&group->pasid_array, IOMMU_NO_PASID, GFP_KERNEL);
	if (ret)
		goto err_unlock;
2024-07-02 14:34:38 +08:00
	ret = __iommu_group_set_domain(group, new_domain);
	if (ret)
		goto err_release;
2025-02-25 17:18:48 -08:00
	curr = xa_store(&group->pasid_array, IOMMU_NO_PASID, entry, GFP_KERNEL);
2024-07-02 14:34:38 +08:00
	WARN_ON(xa_is_err(curr));
	mutex_unlock(&group->mutex);
	return 0;
err_release:
	xa_release(&group->pasid_array, IOMMU_NO_PASID);
err_unlock:
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_NS_GPL(iommu_replace_group_handle, "IOMMUFD_INTERNAL");
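A minimal sketch of the handle-based group attach/replace/detach flow documented above, assuming the caller already owns the group, has two domains and two fresh handles, and has imported the "IOMMUFD_INTERNAL" namespace. All names are placeholders.

	static int example_group_handle_flow(struct iommu_group *group,
					     struct iommu_domain *dom_a,
					     struct iommu_domain *dom_b,
					     struct iommu_attach_handle *h1,
					     struct iommu_attach_handle *h2)
	{
		int ret;

		ret = iommu_attach_group_handle(dom_a, group, h1);
		if (ret)
			return ret;

		/* Switch domains without dropping to the blocking domain */
		ret = iommu_replace_group_handle(group, dom_b, h2);
		if (ret) {
			iommu_detach_group_handle(dom_a, group);
			return ret;
		}

		iommu_detach_group_handle(dom_b, group);
		return 0;
	}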
2025-02-19 17:31:38 -08:00
#if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
/**
 * iommu_dma_prepare_msi() - Map the MSI page in the IOMMU domain
 * @desc: MSI descriptor, will store the MSI page
 * @msi_addr: MSI target address to be mapped
 *
 * The implementation of sw_msi() should take msi_addr and map it to
 * an IOVA in the domain and call msi_desc_set_iommu_msi_iova() with the
 * mapping information.
 *
 * Return: 0 on success or negative error code if the mapping failed.
 */
int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr)
{
	struct device *dev = msi_desc_to_dev(desc);
	struct iommu_group *group = dev->iommu_group;
	int ret = 0;
	if (!group)
		return 0;
	mutex_lock(&group->mutex);
2025-03-24 21:05:17 -07:00
	/* An IDENTITY domain must pass through */
	if (group->domain && group->domain->type != IOMMU_DOMAIN_IDENTITY) {
		switch (group->domain->cookie_type) {
		case IOMMU_COOKIE_DMA_MSI:
		case IOMMU_COOKIE_DMA_IOVA:
			ret = iommu_dma_sw_msi(group->domain, desc, msi_addr);
			break;
		case IOMMU_COOKIE_IOMMUFD:
			ret = iommufd_sw_msi(group->domain, desc, msi_addr);
			break;
		default:
			ret = -EOPNOTSUPP;
			break;
		}
	}
2025-02-19 17:31:38 -08:00
	mutex_unlock(&group->mutex);
	return ret;
}
#endif /* CONFIG_IRQ_MSI_IOMMU */
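A minimal sketch of what a sw_msi() implementation along the lines of the kernel-doc above might do: map msi_addr into the domain and record the mapping on the descriptor. The IOVA choice, the PAGE_SIZE granule and the exact msi_desc_set_iommu_msi_iova() argument convention are assumptions here, not something this file defines.

	static int example_sw_msi(struct iommu_domain *domain,
				  struct msi_desc *desc, phys_addr_t msi_addr)
	{
		/* Placeholder: a real cookie would allocate/track this IOVA */
		unsigned long iova = SZ_1M;
		int ret;

		ret = iommu_map(domain, iova, msi_addr & PAGE_MASK, PAGE_SIZE,
				IOMMU_WRITE | IOMMU_MMIO, GFP_KERNEL);
		if (ret)
			return ret;

		/* Record the doorbell's IOVA and granule shift for the MSI layer
		 * (argument order assumed, see lead-in). */
		msi_desc_set_iommu_msi_iova(desc, iova, PAGE_SHIFT);
		return 0;
	}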