Merge patch series "prep patches for my mkdir series"

NeilBrown <neilb@suse.de> says:

These two patches are cleanups and dependencies for my mkdir changes and
subsequent directory locking changes.

* patches from https://lore.kernel.org/r/20250226062135.2043651-1-neilb@suse.de: (2 commits)
  nfsd: drop fh_update() from S_IFDIR branch of nfsd_create_locked()
  nfs/vfs: discard d_exact_alias()

Link: https://lore.kernel.org/r/20250226062135.2043651-1-neilb@suse.de
Signed-off-by: Christian Brauner <brauner@kernel.org>
Commit 71628584df by Christian Brauner, 2025-02-26 09:55:24 +01:00
820 changed files with 9627 additions and 5286 deletions


@ -226,6 +226,7 @@ Fangrui Song <i@maskray.me> <maskray@google.com>
Felipe W Damasio <felipewd@terra.com.br> Felipe W Damasio <felipewd@terra.com.br>
Felix Kuhling <fxkuehl@gmx.de> Felix Kuhling <fxkuehl@gmx.de>
Felix Moeller <felix@derklecks.de> Felix Moeller <felix@derklecks.de>
Feng Tang <feng.79.tang@gmail.com> <feng.tang@intel.com>
Fenglin Wu <quic_fenglinw@quicinc.com> <fenglinw@codeaurora.org> Fenglin Wu <quic_fenglinw@quicinc.com> <fenglinw@codeaurora.org>
Filipe Lautert <filipe@icewall.org> Filipe Lautert <filipe@icewall.org>
Finn Thain <fthain@linux-m68k.org> <fthain@telegraphics.com.au> Finn Thain <fthain@linux-m68k.org> <fthain@telegraphics.com.au>
@ -317,6 +318,8 @@ Jayachandran C <c.jayachandran@gmail.com> <jnair@caviumnetworks.com>
Jean Tourrilhes <jt@hpl.hp.com> Jean Tourrilhes <jt@hpl.hp.com>
Jeevan Shriram <quic_jshriram@quicinc.com> <jshriram@codeaurora.org> Jeevan Shriram <quic_jshriram@quicinc.com> <jshriram@codeaurora.org>
Jeff Garzik <jgarzik@pretzel.yyz.us> Jeff Garzik <jgarzik@pretzel.yyz.us>
Jeff Johnson <jeff.johnson@oss.qualcomm.com> <jjohnson@codeaurora.org>
Jeff Johnson <jeff.johnson@oss.qualcomm.com> <quic_jjohnson@quicinc.com>
Jeff Layton <jlayton@kernel.org> <jlayton@poochiereds.net> Jeff Layton <jlayton@kernel.org> <jlayton@poochiereds.net>
Jeff Layton <jlayton@kernel.org> <jlayton@primarydata.com> Jeff Layton <jlayton@kernel.org> <jlayton@primarydata.com>
Jeff Layton <jlayton@kernel.org> <jlayton@redhat.com> Jeff Layton <jlayton@kernel.org> <jlayton@redhat.com>
@ -376,6 +379,7 @@ Juha Yrjola <juha.yrjola@solidboot.com>
Julien Thierry <julien.thierry.kdev@gmail.com> <julien.thierry@arm.com> Julien Thierry <julien.thierry.kdev@gmail.com> <julien.thierry@arm.com>
Iskren Chernev <me@iskren.info> <iskren.chernev@gmail.com> Iskren Chernev <me@iskren.info> <iskren.chernev@gmail.com>
Kalle Valo <kvalo@kernel.org> <kvalo@codeaurora.org> Kalle Valo <kvalo@kernel.org> <kvalo@codeaurora.org>
Kalle Valo <kvalo@kernel.org> <quic_kvalo@quicinc.com>
Kalyan Thota <quic_kalyant@quicinc.com> <kalyan_t@codeaurora.org> Kalyan Thota <quic_kalyant@quicinc.com> <kalyan_t@codeaurora.org>
Karthikeyan Periyasamy <quic_periyasa@quicinc.com> <periyasa@codeaurora.org> Karthikeyan Periyasamy <quic_periyasa@quicinc.com> <periyasa@codeaurora.org>
Kathiravan T <quic_kathirav@quicinc.com> <kathirav@codeaurora.org> Kathiravan T <quic_kathirav@quicinc.com> <kathirav@codeaurora.org>
@ -530,6 +534,7 @@ Nicholas Piggin <npiggin@gmail.com> <npiggin@kernel.dk>
Nicholas Piggin <npiggin@gmail.com> <npiggin@suse.de> Nicholas Piggin <npiggin@gmail.com> <npiggin@suse.de>
Nicholas Piggin <npiggin@gmail.com> <nickpiggin@yahoo.com.au> Nicholas Piggin <npiggin@gmail.com> <nickpiggin@yahoo.com.au>
Nicholas Piggin <npiggin@gmail.com> <piggin@cyberone.com.au> Nicholas Piggin <npiggin@gmail.com> <piggin@cyberone.com.au>
Nick Desaulniers <nick.desaulniers+lkml@gmail.com> <ndesaulniers@google.com>
Nicolas Ferre <nicolas.ferre@microchip.com> <nicolas.ferre@atmel.com> Nicolas Ferre <nicolas.ferre@microchip.com> <nicolas.ferre@atmel.com>
Nicolas Pitre <nico@fluxnic.net> <nicolas.pitre@linaro.org> Nicolas Pitre <nico@fluxnic.net> <nicolas.pitre@linaro.org>
Nicolas Pitre <nico@fluxnic.net> <nico@linaro.org> Nicolas Pitre <nico@fluxnic.net> <nico@linaro.org>


@ -2515,11 +2515,9 @@ D: SLS distribution
D: Initial implementation of VC's, pty's and select() D: Initial implementation of VC's, pty's and select()
N: Pavel Machek N: Pavel Machek
E: pavel@ucw.cz E: pavel@kernel.org
P: 4096R/92DFCE96 4FA7 9EEF FCD4 C44F C585 B8C7 C060 2241 92DF CE96 P: 4096R/92DFCE96 4FA7 9EEF FCD4 C44F C585 B8C7 C060 2241 92DF CE96
D: Softcursor for vga, hypertech cdrom support, vcsa bugfix, nbd, D: NBD, Sun4/330 port, USB, work on suspend-to-ram/disk,
D: sun4/330 port, capabilities for elf, speedup for rm on ext2, USB,
D: work on suspend-to-ram/disk, killing duplicates from ioctl32,
D: Altera SoCFPGA and Nokia N900 support. D: Altera SoCFPGA and Nokia N900 support.
S: Czech Republic S: Czech Republic


@ -37,7 +37,7 @@ intended to be exhaustive.
shadow stacks rather than GCS. shadow stacks rather than GCS.
* Support for GCS is reported to userspace via HWCAP_GCS in the aux vector * Support for GCS is reported to userspace via HWCAP_GCS in the aux vector
AT_HWCAP2 entry. AT_HWCAP entry.
* GCS is enabled per thread. While there is support for disabling GCS * GCS is enabled per thread. While there is support for disabling GCS
at runtime this should be done with great care. at runtime this should be done with great care.


@ -25,7 +25,7 @@ to cache translations for virtual addresses. The IOMMU driver uses the
mmu_notifier() support to keep the device TLB cache and the CPU cache in mmu_notifier() support to keep the device TLB cache and the CPU cache in
sync. When an ATS lookup fails for a virtual address, the device should sync. When an ATS lookup fails for a virtual address, the device should
use the PRI in order to request the virtual address to be paged into the use the PRI in order to request the virtual address to be paged into the
CPU page tables. The device must use ATS again in order the fetch the CPU page tables. The device must use ATS again in order to fetch the
translation before use. translation before use.
Shared Hardware Workqueues Shared Hardware Workqueues
@ -216,7 +216,7 @@ submitting work and processing completions.
Single Root I/O Virtualization (SR-IOV) focuses on providing independent Single Root I/O Virtualization (SR-IOV) focuses on providing independent
hardware interfaces for virtualizing hardware. Hence, it's required to be hardware interfaces for virtualizing hardware. Hence, it's required to be
almost fully functional interface to software supporting the traditional an almost fully functional interface to software supporting the traditional
BARs, space for interrupts via MSI-X, its own register layout. BARs, space for interrupts via MSI-X, its own register layout.
Virtual Functions (VFs) are assisted by the Physical Function (PF) Virtual Functions (VFs) are assisted by the Physical Function (PF)
driver. driver.


@ -53,11 +53,17 @@ properties:
reg: reg:
maxItems: 1 maxItems: 1
power-controller:
type: object
reboot-mode:
type: object
required: required:
- compatible - compatible
- reg - reg
additionalProperties: true additionalProperties: false
examples: examples:
- | - |


@ -8,6 +8,7 @@ title: Qualcomm Graphics Clock & Reset Controller
maintainers: maintainers:
- Taniya Das <quic_tdas@quicinc.com> - Taniya Das <quic_tdas@quicinc.com>
- Imran Shaik <quic_imrashai@quicinc.com>
description: | description: |
Qualcomm graphics clock control module provides the clocks, resets and power Qualcomm graphics clock control module provides the clocks, resets and power
@ -23,10 +24,12 @@ description: |
include/dt-bindings/clock/qcom,gpucc-sm8150.h include/dt-bindings/clock/qcom,gpucc-sm8150.h
include/dt-bindings/clock/qcom,gpucc-sm8250.h include/dt-bindings/clock/qcom,gpucc-sm8250.h
include/dt-bindings/clock/qcom,gpucc-sm8350.h include/dt-bindings/clock/qcom,gpucc-sm8350.h
include/dt-bindings/clock/qcom,qcs8300-gpucc.h
properties: properties:
compatible: compatible:
enum: enum:
- qcom,qcs8300-gpucc
- qcom,sdm845-gpucc - qcom,sdm845-gpucc
- qcom,sa8775p-gpucc - qcom,sa8775p-gpucc
- qcom,sc7180-gpucc - qcom,sc7180-gpucc


@ -8,16 +8,20 @@ title: Qualcomm Camera Clock & Reset Controller on SA8775P
maintainers: maintainers:
- Taniya Das <quic_tdas@quicinc.com> - Taniya Das <quic_tdas@quicinc.com>
- Imran Shaik <quic_imrashai@quicinc.com>
description: | description: |
Qualcomm camera clock control module provides the clocks, resets and power Qualcomm camera clock control module provides the clocks, resets and power
domains on SA8775p. domains on SA8775p.
See also: include/dt-bindings/clock/qcom,sa8775p-camcc.h See also:
include/dt-bindings/clock/qcom,qcs8300-camcc.h
include/dt-bindings/clock/qcom,sa8775p-camcc.h
properties: properties:
compatible: compatible:
enum: enum:
- qcom,qcs8300-camcc
- qcom,sa8775p-camcc - qcom,sa8775p-camcc
clocks: clocks:


@ -18,6 +18,7 @@ description: |
properties: properties:
compatible: compatible:
enum: enum:
- qcom,qcs8300-videocc
- qcom,sa8775p-videocc - qcom,sa8775p-videocc
clocks: clocks:


@ -0,0 +1,29 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/powertip,hx8238a.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Powertip Electronic Technology Co. 320 x 240 LCD panel
maintainers:
- Lukasz Majewski <lukma@denx.de>
allOf:
- $ref: panel-dpi.yaml#
properties:
compatible:
items:
- const: powertip,hx8238a
- {} # panel-dpi, but not listed here to avoid false select
height-mm: true
panel-timing: true
port: true
power-supply: true
width-mm: true
additionalProperties: false
...


@ -0,0 +1,29 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/powertip,st7272.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Powertip Electronic Technology Co. 320 x 240 LCD panel
maintainers:
- Lukasz Majewski <lukma@denx.de>
allOf:
- $ref: panel-dpi.yaml#
properties:
compatible:
items:
- const: powertip,st7272
- {} # panel-dpi, but not listed here to avoid false select
height-mm: true
panel-timing: true
port: true
power-supply: true
width-mm: true
additionalProperties: false
...


@ -23,7 +23,7 @@ properties:
compatible: compatible:
enum: enum:
- ti,am625-dss - ti,am625-dss
- ti,am62a7,dss - ti,am62a7-dss
- ti,am65x-dss - ti,am65x-dss
reg: reg:


@ -14,9 +14,8 @@ allOf:
description: | description: |
The Microchip LAN966x outband interrupt controller (OIC) maps the internal The Microchip LAN966x outband interrupt controller (OIC) maps the internal
interrupt sources of the LAN966x device to an external interrupt. interrupt sources of the LAN966x device to a PCI interrupt when the LAN966x
When the LAN966x device is used as a PCI device, the external interrupt is device is used as a PCI device.
routed to the PCI interrupt.
properties: properties:
compatible: compatible:


@ -33,6 +33,10 @@ properties:
clocks: clocks:
maxItems: 1 maxItems: 1
clock-names:
items:
- const: nf_clk
dmas: dmas:
maxItems: 1 maxItems: 1
@ -51,6 +55,7 @@ required:
- reg-names - reg-names
- interrupts - interrupts
- clocks - clocks
- clock-names
unevaluatedProperties: false unevaluatedProperties: false
@ -66,7 +71,8 @@ examples:
#address-cells = <1>; #address-cells = <1>;
#size-cells = <0>; #size-cells = <0>;
interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>; interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&nf_clk>; clocks = <&clk>;
clock-names = "nf_clk";
cdns,board-delay-ps = <4830>; cdns,board-delay-ps = <4830>;
nand@0 { nand@0 {


@ -7,7 +7,6 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm Technologies ath10k wireless devices title: Qualcomm Technologies ath10k wireless devices
maintainers: maintainers:
- Kalle Valo <kvalo@kernel.org>
- Jeff Johnson <jjohnson@kernel.org> - Jeff Johnson <jjohnson@kernel.org>
description: description:


@ -8,7 +8,6 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm Technologies ath11k wireless devices (PCIe) title: Qualcomm Technologies ath11k wireless devices (PCIe)
maintainers: maintainers:
- Kalle Valo <kvalo@kernel.org>
- Jeff Johnson <jjohnson@kernel.org> - Jeff Johnson <jjohnson@kernel.org>
description: | description: |


@ -8,7 +8,6 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm Technologies ath11k wireless devices title: Qualcomm Technologies ath11k wireless devices
maintainers: maintainers:
- Kalle Valo <kvalo@kernel.org>
- Jeff Johnson <jjohnson@kernel.org> - Jeff Johnson <jjohnson@kernel.org>
description: | description: |


@ -9,7 +9,6 @@ title: Qualcomm Technologies ath12k wireless devices (PCIe) with WSI interface
maintainers: maintainers:
- Jeff Johnson <jjohnson@kernel.org> - Jeff Johnson <jjohnson@kernel.org>
- Kalle Valo <kvalo@kernel.org>
description: | description: |
Qualcomm Technologies IEEE 802.11be PCIe devices with WSI interface. Qualcomm Technologies IEEE 802.11be PCIe devices with WSI interface.


@ -9,7 +9,6 @@ title: Qualcomm Technologies ath12k wireless devices (PCIe)
maintainers: maintainers:
- Jeff Johnson <quic_jjohnson@quicinc.com> - Jeff Johnson <quic_jjohnson@quicinc.com>
- Kalle Valo <kvalo@kernel.org>
description: description:
Qualcomm Technologies IEEE 802.11be PCIe devices. Qualcomm Technologies IEEE 802.11be PCIe devices.


@ -36,6 +36,7 @@ properties:
- qcom,qcs404-qfprom - qcom,qcs404-qfprom
- qcom,qcs615-qfprom - qcom,qcs615-qfprom
- qcom,qcs8300-qfprom - qcom,qcs8300-qfprom
- qcom,sar2130p-qfprom
- qcom,sc7180-qfprom - qcom,sc7180-qfprom
- qcom,sc7280-qfprom - qcom,sc7280-qfprom
- qcom,sc8280xp-qfprom - qcom,sc8280xp-qfprom


@ -22,7 +22,7 @@ description:
Each sub-node is identified using the node's name, with valid values listed Each sub-node is identified using the node's name, with valid values listed
for each of the pmics below. for each of the pmics below.
For mp5496, s1, s2 For mp5496, s1, s2, l2, l5
For pm2250, s1, s2, s3, s4, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, For pm2250, s1, s2, s3, s4, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11,
l12, l13, l14, l15, l16, l17, l18, l19, l20, l21, l22 l12, l13, l14, l15, l16, l17, l18, l19, l20, l21, l22


@ -41,6 +41,12 @@ Device Drivers Base
.. kernel-doc:: drivers/base/class.c .. kernel-doc:: drivers/base/class.c
:export: :export:
.. kernel-doc:: include/linux/device/faux.h
:internal:
.. kernel-doc:: drivers/base/faux.c
:export:
.. kernel-doc:: drivers/base/node.c .. kernel-doc:: drivers/base/node.c
:internal: :internal:


@ -0,0 +1,98 @@
Submitting patches to bcachefs:
===============================
Patches must be tested before being submitted, either with the xfstests suite
[0], or the full bcachefs test suite in ktest [1], depending on what's being
touched. Note that ktest wraps xfstests and will be an easier method to running
it for most users; it includes single-command wrappers for all the mainstream
in-kernel local filesystems.
Patches will undergo more testing after being merged (including
lockdep/kasan/preempt/etc. variants); these are not generally required to be
run by the submitter - but do put some thought into what you're changing and
which tests might be relevant, e.g. are you dealing with tricky memory layout
work? kasan, are you doing locking work? then lockdep; and ktest includes
single-command variants for the debug build types you'll most likely need.
The exception to this rule is incomplete WIP/RFC patches: if you're working on
something nontrivial, it's encouraged to send out a WIP patch to let people
know what you're doing and make sure you're on the right track. Just make sure
it includes a brief note as to what's done and what's incomplete, to avoid
confusion.
Rigorous checkpatch.pl adherence is not required (many of its warnings are
considered out of date), but try not to deviate too much without reason.
Focus on writing code that reads well and is organized well; code should be
aesthetically pleasing.
CI:
===
Instead of running your tests locally, when running the full test suite it's
preferable to let a server farm do it in parallel, and then have the results
in a nice test dashboard (which can tell you which failures are new, and
presents results in a git log view, avoiding the need for most bisecting).
That exists [2], and community members may request an account. If you work for
a big tech company, you'll need to help out with server costs to get access -
but the CI is not restricted to running bcachefs tests: it runs any ktest test
(which generally makes it easy to wrap other tests that can run in qemu).
Other things to think about:
============================
- How will we debug this code? Is there sufficient introspection to diagnose
when something starts acting wonky on a user machine?
We don't necessarily need every single field of every data structure visible
with introspection, but having the important fields of all the core data
types wired up makes debugging drastically easier - a bit of thoughtful
foresight greatly reduces the need to have people build custom kernels with
debug patches.
More broadly, think about all the debug tooling that might be needed.
- Does it make the codebase more or less of a mess? Can we also try to do some
organizing, too?
- Do new tests need to be written? New assertions? How do we know and verify
that the code is correct, and what happens if something goes wrong?
We don't yet have automated code coverage analysis or easy fault injection -
but for now, pretend we did and ask what they might tell us.
Assertions are hugely important, given that we don't yet have a systems
language that can do ergonomic embedded correctness proofs. Hitting an assert
in testing is much better than wandering off into undefined behaviour la-la
land - use them. Use them judiciously, and not as a replacement for proper
error handling, but use them.
- Does it need to be performance tested? Should we add new performance counters?
bcachefs has a set of persistent runtime counters which can be viewed with
the 'bcachefs fs top' command; this should give users a basic idea of what
their filesystem is currently doing. If you're doing a new feature or looking
at old code, think if anything should be added.
- If it's a new on disk format feature - have upgrades and downgrades been
tested? (Automated tests exist but aren't in the CI, due to the hassle of
disk image management; coordinate to have them run.)
Mailing list, IRC:
==================
Patches should hit the list [3], but much discussion and code review happens on
IRC as well [4]; many people appreciate the more conversational approach and
quicker feedback.
Additionally, we have a lively user community doing excellent QA work, which
exists primarily on IRC. Please make use of that resource; user feedback is
important for any nontrivial feature, and documenting it in commit messages
would be a good idea.
[0]: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
[1]: https://evilpiepirate.org/git/ktest.git/
[2]: https://evilpiepirate.org/~testdashboard/ci/
[3]: linux-bcachefs@vger.kernel.org
[4]: irc.oftc.net#bcache, #bcachefs-dev


@ -9,4 +9,5 @@ bcachefs Documentation
:numbered: :numbered:
CodingStyle CodingStyle
SubmittingPatches
errorcodes errorcodes


@ -1524,7 +1524,8 @@ attribute-sets:
nested-attributes: bitset nested-attributes: bitset
- -
name: hwtstamp-flags name: hwtstamp-flags
type: u32 type: nest
nested-attributes: bitset
operations: operations:
enum-model: directional enum-model: directional


@ -369,8 +369,8 @@ to their default.
addr.can_family = AF_CAN; addr.can_family = AF_CAN;
addr.can_ifindex = if_nametoindex("can0"); addr.can_ifindex = if_nametoindex("can0");
addr.tp.tx_id = 0x18DA42F1 | CAN_EFF_FLAG; addr.can_addr.tp.tx_id = 0x18DA42F1 | CAN_EFF_FLAG;
addr.tp.rx_id = 0x18DAF142 | CAN_EFF_FLAG; addr.can_addr.tp.rx_id = 0x18DAF142 | CAN_EFF_FLAG;
ret = bind(s, (struct sockaddr *)&addr, sizeof(addr)); ret = bind(s, (struct sockaddr *)&addr, sizeof(addr));
if (ret < 0) if (ret < 0)


@ -112,7 +112,7 @@ Functions
Callbacks Callbacks
========= =========
There are six callbacks: There are seven callbacks:
:: ::
@ -182,6 +182,13 @@ There are six callbacks:
the length of the message. skb->len - offset may be greater the length of the message. skb->len - offset may be greater
then full_len since strparser does not trim the skb. then full_len since strparser does not trim the skb.
::
int (*read_sock)(struct strparser *strp, read_descriptor_t *desc,
sk_read_actor_t recv_actor);
The read_sock callback is used by strparser instead of
sock->ops->read_sock, if provided.
:: ::
int (*read_sock_done)(struct strparser *strp, int err); int (*read_sock_done)(struct strparser *strp, int err);


@ -308,7 +308,7 @@ an involved disclosed party. The current ambassadors list:
Google Kees Cook <keescook@chromium.org> Google Kees Cook <keescook@chromium.org>
LLVM Nick Desaulniers <ndesaulniers@google.com> LLVM Nick Desaulniers <nick.desaulniers+lkml@gmail.com>
============= ======================================================== ============= ========================================================
If you want your organization to be added to the ambassadors list, please If you want your organization to be added to the ambassadors list, please


@ -287,7 +287,7 @@ revelada involucrada. La lista de embajadores actuales:
Google Kees Cook <keescook@chromium.org> Google Kees Cook <keescook@chromium.org>
LLVM Nick Desaulniers <ndesaulniers@google.com> LLVM Nick Desaulniers <nick.desaulniers+lkml@gmail.com>
============= ======================================================== ============= ========================================================
Si quiere que su organización se añada a la lista de embajadores, por Si quiere que su organización se añada a la lista de embajadores, por


@ -1419,7 +1419,7 @@ fetch) is injected in the guest.
S390: S390:
^^^^^ ^^^^^
Returns -EINVAL if the VM has the KVM_VM_S390_UCONTROL flag set. Returns -EINVAL or -EEXIST if the VM has the KVM_VM_S390_UCONTROL flag set.
Returns -EINVAL if called on a protected VM. Returns -EINVAL if called on a protected VM.
4.36 KVM_SET_TSS_ADDR 4.36 KVM_SET_TSS_ADDR


@ -2209,8 +2209,8 @@ F: sound/soc/codecs/cs42l84.*
F: sound/soc/codecs/ssm3515.c F: sound/soc/codecs/ssm3515.c
ARM/APPLE MACHINE SUPPORT ARM/APPLE MACHINE SUPPORT
M: Hector Martin <marcan@marcan.st>
M: Sven Peter <sven@svenpeter.dev> M: Sven Peter <sven@svenpeter.dev>
M: Janne Grunau <j@jannau.net>
R: Alyssa Rosenzweig <alyssa@rosenzweig.io> R: Alyssa Rosenzweig <alyssa@rosenzweig.io>
L: asahi@lists.linux.dev L: asahi@lists.linux.dev
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -2285,7 +2285,7 @@ F: drivers/irqchip/irq-aspeed-i2c-ic.c
ARM/ASPEED MACHINE SUPPORT ARM/ASPEED MACHINE SUPPORT
M: Joel Stanley <joel@jms.id.au> M: Joel Stanley <joel@jms.id.au>
R: Andrew Jeffery <andrew@codeconstruct.com.au> M: Andrew Jeffery <andrew@codeconstruct.com.au>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers) L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers)
S: Supported S: Supported
@ -3655,7 +3655,6 @@ F: Documentation/devicetree/bindings/phy/phy-ath79-usb.txt
F: drivers/phy/qualcomm/phy-ath79-usb.c F: drivers/phy/qualcomm/phy-ath79-usb.c
ATHEROS ATH GENERIC UTILITIES ATHEROS ATH GENERIC UTILITIES
M: Kalle Valo <kvalo@kernel.org>
M: Jeff Johnson <jjohnson@kernel.org> M: Jeff Johnson <jjohnson@kernel.org>
L: linux-wireless@vger.kernel.org L: linux-wireless@vger.kernel.org
S: Supported S: Supported
@ -3860,13 +3859,6 @@ W: https://ez.analog.com/linux-software-drivers
F: Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml F: Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml
F: drivers/pwm/pwm-axi-pwmgen.c F: drivers/pwm/pwm-axi-pwmgen.c
AXXIA I2C CONTROLLER
M: Krzysztof Adamski <krzysztof.adamski@nokia.com>
L: linux-i2c@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/i2c/i2c-axxia.txt
F: drivers/i2c/busses/i2c-axxia.c
AZ6007 DVB DRIVER AZ6007 DVB DRIVER
M: Mauro Carvalho Chehab <mchehab@kernel.org> M: Mauro Carvalho Chehab <mchehab@kernel.org>
L: linux-media@vger.kernel.org L: linux-media@vger.kernel.org
@ -3955,6 +3947,7 @@ M: Kent Overstreet <kent.overstreet@linux.dev>
L: linux-bcachefs@vger.kernel.org L: linux-bcachefs@vger.kernel.org
S: Supported S: Supported
C: irc://irc.oftc.net/bcache C: irc://irc.oftc.net/bcache
P: Documentation/filesystems/bcachefs/SubmittingPatches.rst
T: git https://evilpiepirate.org/git/bcachefs.git T: git https://evilpiepirate.org/git/bcachefs.git
F: fs/bcachefs/ F: fs/bcachefs/
F: Documentation/filesystems/bcachefs/ F: Documentation/filesystems/bcachefs/
@ -5663,7 +5656,7 @@ F: .clang-format
CLANG/LLVM BUILD SUPPORT CLANG/LLVM BUILD SUPPORT
M: Nathan Chancellor <nathan@kernel.org> M: Nathan Chancellor <nathan@kernel.org>
R: Nick Desaulniers <ndesaulniers@google.com> R: Nick Desaulniers <nick.desaulniers+lkml@gmail.com>
R: Bill Wendling <morbo@google.com> R: Bill Wendling <morbo@google.com>
R: Justin Stitt <justinstitt@google.com> R: Justin Stitt <justinstitt@google.com>
L: llvm@lists.linux.dev L: llvm@lists.linux.dev
@ -7116,8 +7109,10 @@ F: rust/kernel/device.rs
F: rust/kernel/device_id.rs F: rust/kernel/device_id.rs
F: rust/kernel/devres.rs F: rust/kernel/devres.rs
F: rust/kernel/driver.rs F: rust/kernel/driver.rs
F: rust/kernel/faux.rs
F: rust/kernel/platform.rs F: rust/kernel/platform.rs
F: samples/rust/rust_driver_platform.rs F: samples/rust/rust_driver_platform.rs
F: samples/rust/rust_driver_faux.rs
DRIVERS FOR OMAP ADAPTIVE VOLTAGE SCALING (AVS) DRIVERS FOR OMAP ADAPTIVE VOLTAGE SCALING (AVS)
M: Nishanth Menon <nm@ti.com> M: Nishanth Menon <nm@ti.com>
@ -7431,7 +7426,6 @@ F: Documentation/devicetree/bindings/display/panel/novatek,nt36672a.yaml
F: drivers/gpu/drm/panel/panel-novatek-nt36672a.c F: drivers/gpu/drm/panel/panel-novatek-nt36672a.c
DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS
M: Karol Herbst <kherbst@redhat.com>
M: Lyude Paul <lyude@redhat.com> M: Lyude Paul <lyude@redhat.com>
M: Danilo Krummrich <dakr@kernel.org> M: Danilo Krummrich <dakr@kernel.org>
L: dri-devel@lists.freedesktop.org L: dri-devel@lists.freedesktop.org
@ -9418,7 +9412,7 @@ F: fs/freevxfs/
FREEZER FREEZER
M: "Rafael J. Wysocki" <rafael@kernel.org> M: "Rafael J. Wysocki" <rafael@kernel.org>
M: Pavel Machek <pavel@ucw.cz> M: Pavel Machek <pavel@kernel.org>
L: linux-pm@vger.kernel.org L: linux-pm@vger.kernel.org
S: Supported S: Supported
F: Documentation/power/freezing-of-tasks.rst F: Documentation/power/freezing-of-tasks.rst
@ -9835,8 +9829,7 @@ F: drivers/input/touchscreen/goodix*
GOOGLE ETHERNET DRIVERS GOOGLE ETHERNET DRIVERS
M: Jeroen de Borst <jeroendb@google.com> M: Jeroen de Borst <jeroendb@google.com>
M: Praveen Kaligineedi <pkaligineedi@google.com> M: Harshitha Ramamurthy <hramamurthy@google.com>
R: Shailend Chand <shailend@google.com>
L: netdev@vger.kernel.org L: netdev@vger.kernel.org
S: Maintained S: Maintained
F: Documentation/networking/device_drivers/ethernet/google/gve.rst F: Documentation/networking/device_drivers/ethernet/google/gve.rst
@ -9878,7 +9871,7 @@ S: Maintained
F: drivers/staging/gpib/ F: drivers/staging/gpib/
GPIO ACPI SUPPORT GPIO ACPI SUPPORT
M: Mika Westerberg <mika.westerberg@linux.intel.com> M: Mika Westerberg <westeri@kernel.org>
M: Andy Shevchenko <andriy.shevchenko@linux.intel.com> M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
L: linux-gpio@vger.kernel.org L: linux-gpio@vger.kernel.org
L: linux-acpi@vger.kernel.org L: linux-acpi@vger.kernel.org
@ -10253,7 +10246,7 @@ F: drivers/video/fbdev/hgafb.c
HIBERNATION (aka Software Suspend, aka swsusp) HIBERNATION (aka Software Suspend, aka swsusp)
M: "Rafael J. Wysocki" <rafael@kernel.org> M: "Rafael J. Wysocki" <rafael@kernel.org>
M: Pavel Machek <pavel@ucw.cz> M: Pavel Machek <pavel@kernel.org>
L: linux-pm@vger.kernel.org L: linux-pm@vger.kernel.org
S: Supported S: Supported
B: https://bugzilla.kernel.org B: https://bugzilla.kernel.org
@ -10822,7 +10815,7 @@ S: Odd Fixes
F: drivers/tty/hvc/ F: drivers/tty/hvc/
I2C ACPI SUPPORT I2C ACPI SUPPORT
M: Mika Westerberg <mika.westerberg@linux.intel.com> M: Mika Westerberg <westeri@kernel.org>
L: linux-i2c@vger.kernel.org L: linux-i2c@vger.kernel.org
L: linux-acpi@vger.kernel.org L: linux-acpi@vger.kernel.org
S: Maintained S: Maintained
@ -13124,8 +13117,8 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/har
F: scripts/leaking_addresses.pl F: scripts/leaking_addresses.pl
LED SUBSYSTEM LED SUBSYSTEM
M: Pavel Machek <pavel@ucw.cz>
M: Lee Jones <lee@kernel.org> M: Lee Jones <lee@kernel.org>
M: Pavel Machek <pavel@kernel.org>
L: linux-leds@vger.kernel.org L: linux-leds@vger.kernel.org
S: Maintained S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lee/leds.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/lee/leds.git
@ -16438,7 +16431,7 @@ X: drivers/net/can/
X: drivers/net/wireless/ X: drivers/net/wireless/
NETWORKING DRIVERS (WIRELESS) NETWORKING DRIVERS (WIRELESS)
M: Kalle Valo <kvalo@kernel.org> M: Johannes Berg <johannes@sipsolutions.net>
L: linux-wireless@vger.kernel.org L: linux-wireless@vger.kernel.org
S: Maintained S: Maintained
W: https://wireless.wiki.kernel.org/ W: https://wireless.wiki.kernel.org/
@ -16462,6 +16455,28 @@ F: include/net/dsa.h
F: net/dsa/ F: net/dsa/
F: tools/testing/selftests/drivers/net/dsa/ F: tools/testing/selftests/drivers/net/dsa/
NETWORKING [ETHTOOL]
M: Andrew Lunn <andrew@lunn.ch>
M: Jakub Kicinski <kuba@kernel.org>
F: Documentation/netlink/specs/ethtool.yaml
F: Documentation/networking/ethtool-netlink.rst
F: include/linux/ethtool*
F: include/uapi/linux/ethtool*
F: net/ethtool/
F: tools/testing/selftests/drivers/net/*/ethtool*
NETWORKING [ETHTOOL CABLE TEST]
M: Andrew Lunn <andrew@lunn.ch>
F: net/ethtool/cabletest.c
F: tools/testing/selftests/drivers/net/*/ethtool*
K: cable_test
NETWORKING [ETHTOOL MAC MERGE]
M: Vladimir Oltean <vladimir.oltean@nxp.com>
F: net/ethtool/mm.c
F: tools/testing/selftests/drivers/net/hw/ethtool_mm.sh
K: ethtool_mm
NETWORKING [GENERAL] NETWORKING [GENERAL]
M: "David S. Miller" <davem@davemloft.net> M: "David S. Miller" <davem@davemloft.net>
M: Eric Dumazet <edumazet@google.com> M: Eric Dumazet <edumazet@google.com>
@ -16493,6 +16508,7 @@ F: include/linux/netdev*
F: include/linux/netlink.h F: include/linux/netlink.h
F: include/linux/netpoll.h F: include/linux/netpoll.h
F: include/linux/rtnetlink.h F: include/linux/rtnetlink.h
F: include/linux/sctp.h
F: include/linux/seq_file_net.h F: include/linux/seq_file_net.h
F: include/linux/skbuff* F: include/linux/skbuff*
F: include/net/ F: include/net/
@ -16509,6 +16525,7 @@ F: include/uapi/linux/netdev*
F: include/uapi/linux/netlink.h F: include/uapi/linux/netlink.h
F: include/uapi/linux/netlink_diag.h F: include/uapi/linux/netlink_diag.h
F: include/uapi/linux/rtnetlink.h F: include/uapi/linux/rtnetlink.h
F: include/uapi/linux/sctp.h
F: lib/net_utils.c F: lib/net_utils.c
F: lib/random32.c F: lib/random32.c
F: net/ F: net/
@ -16621,6 +16638,7 @@ F: tools/testing/selftests/net/mptcp/
NETWORKING [TCP] NETWORKING [TCP]
M: Eric Dumazet <edumazet@google.com> M: Eric Dumazet <edumazet@google.com>
M: Neal Cardwell <ncardwell@google.com> M: Neal Cardwell <ncardwell@google.com>
R: Kuniyuki Iwashima <kuniyu@amazon.com>
L: netdev@vger.kernel.org L: netdev@vger.kernel.org
S: Maintained S: Maintained
F: Documentation/networking/net_cachelines/tcp_sock.rst F: Documentation/networking/net_cachelines/tcp_sock.rst
@ -16648,6 +16666,31 @@ F: include/net/tls.h
F: include/uapi/linux/tls.h F: include/uapi/linux/tls.h
F: net/tls/* F: net/tls/*
NETWORKING [SOCKETS]
M: Eric Dumazet <edumazet@google.com>
M: Kuniyuki Iwashima <kuniyu@amazon.com>
M: Paolo Abeni <pabeni@redhat.com>
M: Willem de Bruijn <willemb@google.com>
S: Maintained
F: include/linux/sock_diag.h
F: include/linux/socket.h
F: include/linux/sockptr.h
F: include/net/sock.h
F: include/net/sock_reuseport.h
F: include/uapi/linux/socket.h
F: net/core/*sock*
F: net/core/scm.c
F: net/socket.c
NETWORKING [UNIX SOCKETS]
M: Kuniyuki Iwashima <kuniyu@amazon.com>
S: Maintained
F: include/net/af_unix.h
F: include/net/netns/unix.h
F: include/uapi/linux/unix_diag.h
F: net/unix/
F: tools/testing/selftests/net/af_unix/
NETXEN (1/10) GbE SUPPORT NETXEN (1/10) GbE SUPPORT
M: Manish Chopra <manishc@marvell.com> M: Manish Chopra <manishc@marvell.com>
M: Rahul Verma <rahulv@marvell.com> M: Rahul Verma <rahulv@marvell.com>
@ -16781,7 +16824,7 @@ F: include/linux/tick.h
F: kernel/time/tick*.* F: kernel/time/tick*.*
NOKIA N900 CAMERA SUPPORT (ET8EK8 SENSOR, AD5820 FOCUS) NOKIA N900 CAMERA SUPPORT (ET8EK8 SENSOR, AD5820 FOCUS)
M: Pavel Machek <pavel@ucw.cz> M: Pavel Machek <pavel@kernel.org>
M: Sakari Ailus <sakari.ailus@iki.fi> M: Sakari Ailus <sakari.ailus@iki.fi>
L: linux-media@vger.kernel.org L: linux-media@vger.kernel.org
S: Maintained S: Maintained
@ -17713,6 +17756,7 @@ L: netdev@vger.kernel.org
L: dev@openvswitch.org L: dev@openvswitch.org
S: Maintained S: Maintained
W: http://openvswitch.org W: http://openvswitch.org
F: Documentation/networking/openvswitch.rst
F: include/uapi/linux/openvswitch.h F: include/uapi/linux/openvswitch.h
F: net/openvswitch/ F: net/openvswitch/
F: tools/testing/selftests/net/openvswitch/ F: tools/testing/selftests/net/openvswitch/
@ -19312,7 +19356,6 @@ Q: http://patchwork.linuxtv.org/project/linux-media/list/
F: drivers/media/tuners/qt1010* F: drivers/media/tuners/qt1010*
QUALCOMM ATH12K WIRELESS DRIVER QUALCOMM ATH12K WIRELESS DRIVER
M: Kalle Valo <kvalo@kernel.org>
M: Jeff Johnson <jjohnson@kernel.org> M: Jeff Johnson <jjohnson@kernel.org>
L: ath12k@lists.infradead.org L: ath12k@lists.infradead.org
S: Supported S: Supported
@ -19322,7 +19365,6 @@ F: drivers/net/wireless/ath/ath12k/
N: ath12k N: ath12k
QUALCOMM ATHEROS ATH10K WIRELESS DRIVER QUALCOMM ATHEROS ATH10K WIRELESS DRIVER
M: Kalle Valo <kvalo@kernel.org>
M: Jeff Johnson <jjohnson@kernel.org> M: Jeff Johnson <jjohnson@kernel.org>
L: ath10k@lists.infradead.org L: ath10k@lists.infradead.org
S: Supported S: Supported
@ -19332,7 +19374,6 @@ F: drivers/net/wireless/ath/ath10k/
N: ath10k N: ath10k
QUALCOMM ATHEROS ATH11K WIRELESS DRIVER QUALCOMM ATHEROS ATH11K WIRELESS DRIVER
M: Kalle Valo <kvalo@kernel.org>
M: Jeff Johnson <jjohnson@kernel.org> M: Jeff Johnson <jjohnson@kernel.org>
L: ath11k@lists.infradead.org L: ath11k@lists.infradead.org
S: Supported S: Supported
@ -19467,6 +19508,15 @@ L: dmaengine@vger.kernel.org
S: Supported S: Supported
F: drivers/dma/qcom/hidma* F: drivers/dma/qcom/hidma*
QUALCOMM I2C QCOM GENI DRIVER
M: Mukesh Kumar Savaliya <quic_msavaliy@quicinc.com>
M: Viken Dadhaniya <quic_vdadhani@quicinc.com>
L: linux-i2c@vger.kernel.org
L: linux-arm-msm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/i2c/qcom,i2c-geni-qcom.yaml
F: drivers/i2c/busses/i2c-qcom-geni.c
QUALCOMM I2C CCI DRIVER QUALCOMM I2C CCI DRIVER
M: Loic Poulain <loic.poulain@linaro.org> M: Loic Poulain <loic.poulain@linaro.org>
M: Robert Foss <rfoss@kernel.org> M: Robert Foss <rfoss@kernel.org>
@ -19829,7 +19879,7 @@ F: net/rds/
F: tools/testing/selftests/net/rds/ F: tools/testing/selftests/net/rds/
RDT - RESOURCE ALLOCATION RDT - RESOURCE ALLOCATION
M: Fenghua Yu <fenghua.yu@intel.com> M: Tony Luck <tony.luck@intel.com>
M: Reinette Chatre <reinette.chatre@intel.com> M: Reinette Chatre <reinette.chatre@intel.com>
L: linux-kernel@vger.kernel.org L: linux-kernel@vger.kernel.org
S: Supported S: Supported
@ -22806,7 +22856,7 @@ F: drivers/sh/
SUSPEND TO RAM SUSPEND TO RAM
M: "Rafael J. Wysocki" <rafael@kernel.org> M: "Rafael J. Wysocki" <rafael@kernel.org>
M: Len Brown <len.brown@intel.com> M: Len Brown <len.brown@intel.com>
M: Pavel Machek <pavel@ucw.cz> M: Pavel Machek <pavel@kernel.org>
L: linux-pm@vger.kernel.org L: linux-pm@vger.kernel.org
S: Supported S: Supported
B: https://bugzilla.kernel.org B: https://bugzilla.kernel.org
@ -24019,7 +24069,6 @@ F: tools/testing/selftests/ftrace/
TRACING MMIO ACCESSES (MMIOTRACE) TRACING MMIO ACCESSES (MMIOTRACE)
M: Steven Rostedt <rostedt@goodmis.org> M: Steven Rostedt <rostedt@goodmis.org>
M: Masami Hiramatsu <mhiramat@kernel.org> M: Masami Hiramatsu <mhiramat@kernel.org>
R: Karol Herbst <karolherbst@gmail.com>
R: Pekka Paalanen <ppaalanen@gmail.com> R: Pekka Paalanen <ppaalanen@gmail.com>
L: linux-kernel@vger.kernel.org L: linux-kernel@vger.kernel.org
L: nouveau@lists.freedesktop.org L: nouveau@lists.freedesktop.org


@ -2,7 +2,7 @@
VERSION = 6 VERSION = 6
PATCHLEVEL = 14 PATCHLEVEL = 14
SUBLEVEL = 0 SUBLEVEL = 0
EXTRAVERSION = -rc1 EXTRAVERSION = -rc4
NAME = Baby Opossum Posse NAME = Baby Opossum Posse
# *DOCUMENTATION* # *DOCUMENTATION*
@ -1120,8 +1120,8 @@ LDFLAGS_vmlinux += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL)
endif endif
# Align the bit size of userspace programs with the kernel # Align the bit size of userspace programs with the kernel
KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS)) KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS)) KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
# make the checker run with the right architecture # make the checker run with the right architecture
CHECKFLAGS += --arch=$(ARCH) CHECKFLAGS += --arch=$(ARCH)
@ -1421,18 +1421,13 @@ ifneq ($(wildcard $(resolve_btfids_O)),)
$(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean $(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean
endif endif
# Clear a bunch of variables before executing the submake
ifeq ($(quiet),silent_)
tools_silent=s
endif
tools/: FORCE tools/: FORCE
$(Q)mkdir -p $(objtree)/tools $(Q)mkdir -p $(objtree)/tools
$(Q)$(MAKE) LDFLAGS= MAKEFLAGS="$(tools_silent) $(filter --j% -j,$(MAKEFLAGS))" O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/ $(Q)$(MAKE) LDFLAGS= O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/
tools/%: FORCE tools/%: FORCE
$(Q)mkdir -p $(objtree)/tools $(Q)mkdir -p $(objtree)/tools
$(Q)$(MAKE) LDFLAGS= MAKEFLAGS="$(tools_silent) $(filter --j% -j,$(MAKEFLAGS))" O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/ $* $(Q)$(MAKE) LDFLAGS= O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/ $*
# --------------------------------------------------------------------------- # ---------------------------------------------------------------------------
# Kernel selftest # Kernel selftest


@ -74,7 +74,7 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
/* /*
* This is used to ensure we don't load something for the wrong architecture. * This is used to ensure we don't load something for the wrong architecture.
*/ */
#define elf_check_arch(x) ((x)->e_machine == EM_ALPHA) #define elf_check_arch(x) (((x)->e_machine == EM_ALPHA) && !((x)->e_flags & EF_ALPHA_32BIT))
/* /*
* These are used to set parameters in the core dumps. * These are used to set parameters in the core dumps.
@ -137,10 +137,6 @@ extern int dump_elf_task(elf_greg_t *dest, struct task_struct *task);
: amask (AMASK_CIX) ? "ev6" : "ev67"); \ : amask (AMASK_CIX) ? "ev6" : "ev67"); \
}) })
#define SET_PERSONALITY(EX) \
set_personality(((EX).e_flags & EF_ALPHA_32BIT) \
? PER_LINUX_32BIT : PER_LINUX)
extern int alpha_l1i_cacheshape; extern int alpha_l1i_cacheshape;
extern int alpha_l1d_cacheshape; extern int alpha_l1d_cacheshape;
extern int alpha_l2_cacheshape; extern int alpha_l2_cacheshape;


@ -135,7 +135,7 @@ struct crb_struct {
/* virtual->physical map */ /* virtual->physical map */
unsigned long map_entries; unsigned long map_entries;
unsigned long map_pages; unsigned long map_pages;
struct vf_map_struct map[1]; struct vf_map_struct map[];
}; };
struct memclust_struct { struct memclust_struct {


@ -360,7 +360,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
extern void paging_init(void); extern void paging_init(void);
/* We have our own get_unmapped_area to cope with ADDR_LIMIT_32BIT. */ /* We have our own get_unmapped_area */
#define HAVE_ARCH_UNMAPPED_AREA #define HAVE_ARCH_UNMAPPED_AREA
#endif /* _ALPHA_PGTABLE_H */ #endif /* _ALPHA_PGTABLE_H */


@ -8,23 +8,19 @@
#ifndef __ASM_ALPHA_PROCESSOR_H #ifndef __ASM_ALPHA_PROCESSOR_H
#define __ASM_ALPHA_PROCESSOR_H #define __ASM_ALPHA_PROCESSOR_H
#include <linux/personality.h> /* for ADDR_LIMIT_32BIT */
/* /*
* We have a 42-bit user address space: 4TB user VM... * We have a 42-bit user address space: 4TB user VM...
*/ */
#define TASK_SIZE (0x40000000000UL) #define TASK_SIZE (0x40000000000UL)
#define STACK_TOP \ #define STACK_TOP (0x00120000000UL)
(current->personality & ADDR_LIMIT_32BIT ? 0x80000000 : 0x00120000000UL)
#define STACK_TOP_MAX 0x00120000000UL #define STACK_TOP_MAX 0x00120000000UL
/* This decides where the kernel will search for a free chunk of vm /* This decides where the kernel will search for a free chunk of vm
* space during mmap's. * space during mmap's.
*/ */
#define TASK_UNMAPPED_BASE \ #define TASK_UNMAPPED_BASE (TASK_SIZE / 2)
((current->personality & ADDR_LIMIT_32BIT) ? 0x40000000 : TASK_SIZE / 2)
/* This is dead. Everything has been moved to thread_info. */ /* This is dead. Everything has been moved to thread_info. */
struct thread_struct { }; struct thread_struct { };


@ -42,6 +42,8 @@ struct pt_regs {
unsigned long trap_a0; unsigned long trap_a0;
unsigned long trap_a1; unsigned long trap_a1;
unsigned long trap_a2; unsigned long trap_a2;
/* This makes the stack 16-byte aligned as GCC expects */
unsigned long __pad0;
/* These are saved by PAL-code: */ /* These are saved by PAL-code: */
unsigned long ps; unsigned long ps;
unsigned long pc; unsigned long pc;


@ -19,9 +19,13 @@ static void __used foo(void)
DEFINE(TI_STATUS, offsetof(struct thread_info, status)); DEFINE(TI_STATUS, offsetof(struct thread_info, status));
BLANK(); BLANK();
DEFINE(SP_OFF, offsetof(struct pt_regs, ps));
DEFINE(SIZEOF_PT_REGS, sizeof(struct pt_regs)); DEFINE(SIZEOF_PT_REGS, sizeof(struct pt_regs));
BLANK(); BLANK();
DEFINE(SWITCH_STACK_SIZE, sizeof(struct switch_stack));
BLANK();
DEFINE(HAE_CACHE, offsetof(struct alpha_machine_vector, hae_cache)); DEFINE(HAE_CACHE, offsetof(struct alpha_machine_vector, hae_cache));
DEFINE(HAE_REG, offsetof(struct alpha_machine_vector, hae_register)); DEFINE(HAE_REG, offsetof(struct alpha_machine_vector, hae_register));
} }


@ -15,10 +15,6 @@
.set noat .set noat
.cfi_sections .debug_frame .cfi_sections .debug_frame
/* Stack offsets. */
#define SP_OFF 184
#define SWITCH_STACK_SIZE 64
.macro CFI_START_OSF_FRAME func .macro CFI_START_OSF_FRAME func
.align 4 .align 4
.globl \func .globl \func
@ -198,8 +194,8 @@ CFI_END_OSF_FRAME entArith
CFI_START_OSF_FRAME entMM CFI_START_OSF_FRAME entMM
SAVE_ALL SAVE_ALL
/* save $9 - $15 so the inline exception code can manipulate them. */ /* save $9 - $15 so the inline exception code can manipulate them. */
subq $sp, 56, $sp subq $sp, 64, $sp
.cfi_adjust_cfa_offset 56 .cfi_adjust_cfa_offset 64
stq $9, 0($sp) stq $9, 0($sp)
stq $10, 8($sp) stq $10, 8($sp)
stq $11, 16($sp) stq $11, 16($sp)
@ -214,7 +210,7 @@ CFI_START_OSF_FRAME entMM
.cfi_rel_offset $13, 32 .cfi_rel_offset $13, 32
.cfi_rel_offset $14, 40 .cfi_rel_offset $14, 40
.cfi_rel_offset $15, 48 .cfi_rel_offset $15, 48
addq $sp, 56, $19 addq $sp, 64, $19
/* handle the fault */ /* handle the fault */
lda $8, 0x3fff lda $8, 0x3fff
bic $sp, $8, $8 bic $sp, $8, $8
@ -227,7 +223,7 @@ CFI_START_OSF_FRAME entMM
ldq $13, 32($sp) ldq $13, 32($sp)
ldq $14, 40($sp) ldq $14, 40($sp)
ldq $15, 48($sp) ldq $15, 48($sp)
addq $sp, 56, $sp addq $sp, 64, $sp
.cfi_restore $9 .cfi_restore $9
.cfi_restore $10 .cfi_restore $10
.cfi_restore $11 .cfi_restore $11
@ -235,7 +231,7 @@ CFI_START_OSF_FRAME entMM
.cfi_restore $13 .cfi_restore $13
.cfi_restore $14 .cfi_restore $14
.cfi_restore $15 .cfi_restore $15
.cfi_adjust_cfa_offset -56 .cfi_adjust_cfa_offset -64
/* finish up the syscall as normal. */ /* finish up the syscall as normal. */
br ret_from_sys_call br ret_from_sys_call
CFI_END_OSF_FRAME entMM CFI_END_OSF_FRAME entMM
@ -382,8 +378,8 @@ entUnaUser:
.cfi_restore $0 .cfi_restore $0
.cfi_adjust_cfa_offset -256 .cfi_adjust_cfa_offset -256
SAVE_ALL /* setup normal kernel stack */ SAVE_ALL /* setup normal kernel stack */
lda $sp, -56($sp) lda $sp, -64($sp)
.cfi_adjust_cfa_offset 56 .cfi_adjust_cfa_offset 64
stq $9, 0($sp) stq $9, 0($sp)
stq $10, 8($sp) stq $10, 8($sp)
stq $11, 16($sp) stq $11, 16($sp)
@ -399,7 +395,7 @@ entUnaUser:
.cfi_rel_offset $14, 40 .cfi_rel_offset $14, 40
.cfi_rel_offset $15, 48 .cfi_rel_offset $15, 48
lda $8, 0x3fff lda $8, 0x3fff
addq $sp, 56, $19 addq $sp, 64, $19
bic $sp, $8, $8 bic $sp, $8, $8
jsr $26, do_entUnaUser jsr $26, do_entUnaUser
ldq $9, 0($sp) ldq $9, 0($sp)
@ -409,7 +405,7 @@ entUnaUser:
ldq $13, 32($sp) ldq $13, 32($sp)
ldq $14, 40($sp) ldq $14, 40($sp)
ldq $15, 48($sp) ldq $15, 48($sp)
lda $sp, 56($sp) lda $sp, 64($sp)
.cfi_restore $9 .cfi_restore $9
.cfi_restore $10 .cfi_restore $10
.cfi_restore $11 .cfi_restore $11
@ -417,7 +413,7 @@ entUnaUser:
.cfi_restore $13 .cfi_restore $13
.cfi_restore $14 .cfi_restore $14
.cfi_restore $15 .cfi_restore $15
.cfi_adjust_cfa_offset -56 .cfi_adjust_cfa_offset -64
br ret_from_sys_call br ret_from_sys_call
CFI_END_OSF_FRAME entUna CFI_END_OSF_FRAME entUna


@ -1210,8 +1210,7 @@ SYSCALL_DEFINE1(old_adjtimex, struct timex32 __user *, txc_p)
return ret; return ret;
} }
/* Get an address range which is currently unmapped. Similar to the /* Get an address range which is currently unmapped. */
generic version except that we know how to honor ADDR_LIMIT_32BIT. */
static unsigned long static unsigned long
arch_get_unmapped_area_1(unsigned long addr, unsigned long len, arch_get_unmapped_area_1(unsigned long addr, unsigned long len,
@ -1230,13 +1229,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long len, unsigned long pgoff,
unsigned long flags, vm_flags_t vm_flags) unsigned long flags, vm_flags_t vm_flags)
{ {
unsigned long limit; unsigned long limit = TASK_SIZE;
/* "32 bit" actually means 31 bit, since pointers sign extend. */
if (current->personality & ADDR_LIMIT_32BIT)
limit = 0x80000000;
else
limit = TASK_SIZE;
if (len > limit) if (len > limit)
return -ENOMEM; return -ENOMEM;


@ -13,6 +13,7 @@
#include <linux/log2.h> #include <linux/log2.h>
#include <linux/dma-map-ops.h> #include <linux/dma-map-ops.h>
#include <linux/iommu-helper.h> #include <linux/iommu-helper.h>
#include <linux/string_choices.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/hwrpb.h> #include <asm/hwrpb.h>
@ -212,7 +213,7 @@ static int pci_dac_dma_supported(struct pci_dev *dev, u64 mask)
/* If both conditions above are met, we are fine. */ /* If both conditions above are met, we are fine. */
DBGA("pci_dac_dma_supported %s from %ps\n", DBGA("pci_dac_dma_supported %s from %ps\n",
ok ? "yes" : "no", __builtin_return_address(0)); str_yes_no(ok), __builtin_return_address(0));
return ok; return ok;
} }


@ -649,7 +649,7 @@ s_reg_to_mem (unsigned long s_reg)
static int unauser_reg_offsets[32] = { static int unauser_reg_offsets[32] = {
R(r0), R(r1), R(r2), R(r3), R(r4), R(r5), R(r6), R(r7), R(r8), R(r0), R(r1), R(r2), R(r3), R(r4), R(r5), R(r6), R(r7), R(r8),
/* r9 ... r15 are stored in front of regs. */ /* r9 ... r15 are stored in front of regs. */
-56, -48, -40, -32, -24, -16, -8, -64, -56, -48, -40, -32, -24, -16, /* padding at -8 */
R(r16), R(r17), R(r18), R(r16), R(r17), R(r18),
R(r19), R(r20), R(r21), R(r22), R(r23), R(r24), R(r25), R(r26), R(r19), R(r20), R(r21), R(r22), R(r23), R(r24), R(r25), R(r26),
R(r27), R(r28), R(gp), R(r27), R(r28), R(gp),


@ -78,8 +78,8 @@ __load_new_mm_context(struct mm_struct *next_mm)
/* Macro for exception fixup code to access integer registers. */ /* Macro for exception fixup code to access integer registers. */
#define dpf_reg(r) \ #define dpf_reg(r) \
(((unsigned long *)regs)[(r) <= 8 ? (r) : (r) <= 15 ? (r)-16 : \ (((unsigned long *)regs)[(r) <= 8 ? (r) : (r) <= 15 ? (r)-17 : \
(r) <= 18 ? (r)+10 : (r)-10]) (r) <= 18 ? (r)+11 : (r)-10])
asmlinkage void asmlinkage void
do_page_fault(unsigned long address, unsigned long mmcsr, do_page_fault(unsigned long address, unsigned long mmcsr,


@ -225,7 +225,6 @@ config ARM64
select HAVE_FUNCTION_ERROR_INJECTION select HAVE_FUNCTION_ERROR_INJECTION
select HAVE_FUNCTION_GRAPH_FREGS select HAVE_FUNCTION_GRAPH_FREGS
select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_RETVAL
select HAVE_GCC_PLUGINS select HAVE_GCC_PLUGINS
select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && \ select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && \
HW_PERF_EVENTS && HAVE_PERF_EVENTS_NMI HW_PERF_EVENTS && HAVE_PERF_EVENTS_NMI


@ -48,7 +48,11 @@ KBUILD_CFLAGS += $(CC_FLAGS_NO_FPU) \
KBUILD_CFLAGS += $(call cc-disable-warning, psabi) KBUILD_CFLAGS += $(call cc-disable-warning, psabi)
KBUILD_AFLAGS += $(compat_vdso) KBUILD_AFLAGS += $(compat_vdso)
ifeq ($(call test-ge, $(CONFIG_RUSTC_VERSION), 108500),y)
KBUILD_RUSTFLAGS += --target=aarch64-unknown-none-softfloat
else
KBUILD_RUSTFLAGS += --target=aarch64-unknown-none -Ctarget-feature="-neon" KBUILD_RUSTFLAGS += --target=aarch64-unknown-none -Ctarget-feature="-neon"
endif
KBUILD_CFLAGS += $(call cc-option,-mabi=lp64) KBUILD_CFLAGS += $(call cc-option,-mabi=lp64)
KBUILD_AFLAGS += $(call cc-option,-mabi=lp64) KBUILD_AFLAGS += $(call cc-option,-mabi=lp64)


@ -226,7 +226,6 @@
}; };
&uart5 { &uart5 {
pinctrl-0 = <&uart5_xfer>;
rts-gpios = <&gpio0 RK_PB5 GPIO_ACTIVE_HIGH>; rts-gpios = <&gpio0 RK_PB5 GPIO_ACTIVE_HIGH>;
status = "okay"; status = "okay";
}; };


@ -396,6 +396,12 @@
status = "okay"; status = "okay";
}; };
&uart5 {
/delete-property/ dmas;
/delete-property/ dma-names;
pinctrl-0 = <&uart5_xfer>;
};
/* Mule UCAN */ /* Mule UCAN */
&usb_host0_ehci { &usb_host0_ehci {
status = "okay"; status = "okay";


@ -17,8 +17,7 @@
&gmac2io { &gmac2io {
phy-handle = <&yt8531c>; phy-handle = <&yt8531c>;
tx_delay = <0x19>; phy-mode = "rgmii-id";
rx_delay = <0x05>;
status = "okay"; status = "okay";
mdio { mdio {


@ -15,6 +15,7 @@
&gmac2io { &gmac2io {
phy-handle = <&rtl8211e>; phy-handle = <&rtl8211e>;
phy-mode = "rgmii";
tx_delay = <0x24>; tx_delay = <0x24>;
rx_delay = <0x18>; rx_delay = <0x18>;
status = "okay"; status = "okay";


@ -109,7 +109,6 @@
assigned-clocks = <&cru SCLK_MAC2IO>, <&cru SCLK_MAC2IO_EXT>; assigned-clocks = <&cru SCLK_MAC2IO>, <&cru SCLK_MAC2IO_EXT>;
assigned-clock-parents = <&gmac_clk>, <&gmac_clk>; assigned-clock-parents = <&gmac_clk>, <&gmac_clk>;
clock_in_out = "input"; clock_in_out = "input";
phy-mode = "rgmii";
phy-supply = <&vcc_io>; phy-supply = <&vcc_io>;
pinctrl-0 = <&rgmiim1_pins>; pinctrl-0 = <&rgmiim1_pins>;
pinctrl-names = "default"; pinctrl-names = "default";


@ -22,11 +22,11 @@
}; };
/* EC turns on w/ pp900_usb_en */ /* EC turns on w/ pp900_usb_en */
pp900_usb: pp900-ap { pp900_usb: regulator-pp900-ap {
}; };
/* EC turns on w/ pp900_pcie_en */ /* EC turns on w/ pp900_pcie_en */
pp900_pcie: pp900-ap { pp900_pcie: regulator-pp900-ap {
}; };
pp3000: regulator-pp3000 { pp3000: regulator-pp3000 {
@ -126,7 +126,7 @@
}; };
/* Always on; plain and simple */ /* Always on; plain and simple */
pp3000_ap: pp3000_emmc: pp3000 { pp3000_ap: pp3000_emmc: regulator-pp3000 {
}; };
pp1500_ap_io: regulator-pp1500-ap-io { pp1500_ap_io: regulator-pp1500-ap-io {
@ -160,7 +160,7 @@
}; };
/* EC turns on w/ pp3300_usb_en_l */ /* EC turns on w/ pp3300_usb_en_l */
pp3300_usb: pp3300 { pp3300_usb: regulator-pp3300 {
}; };
/* gpio is shared with pp1800_pcie and pinctrl is set there */ /* gpio is shared with pp1800_pcie and pinctrl is set there */


@ -92,7 +92,7 @@
}; };
/* EC turns on pp1800_s3_en */ /* EC turns on pp1800_s3_en */
pp1800_s3: pp1800 { pp1800_s3: regulator-pp1800 {
}; };
/* pp3300 children, sorted by name */ /* pp3300 children, sorted by name */
@ -109,11 +109,11 @@
}; };
/* EC turns on pp3300_s0_en */ /* EC turns on pp3300_s0_en */
pp3300_s0: pp3300 { pp3300_s0: regulator-pp3300 {
}; };
/* EC turns on pp3300_s3_en */ /* EC turns on pp3300_s3_en */
pp3300_s3: pp3300 { pp3300_s3: regulator-pp3300 {
}; };
/* /*


@ -189,39 +189,39 @@
}; };
/* EC turns on w/ pp900_ddrpll_en */ /* EC turns on w/ pp900_ddrpll_en */
pp900_ddrpll: pp900-ap { pp900_ddrpll: regulator-pp900-ap {
}; };
/* EC turns on w/ pp900_pll_en */ /* EC turns on w/ pp900_pll_en */
pp900_pll: pp900-ap { pp900_pll: regulator-pp900-ap {
}; };
/* EC turns on w/ pp900_pmu_en */ /* EC turns on w/ pp900_pmu_en */
pp900_pmu: pp900-ap { pp900_pmu: regulator-pp900-ap {
}; };
/* EC turns on w/ pp1800_s0_en_l */ /* EC turns on w/ pp1800_s0_en_l */
pp1800_ap_io: pp1800_emmc: pp1800_nfc: pp1800_s0: pp1800 { pp1800_ap_io: pp1800_emmc: pp1800_nfc: pp1800_s0: regulator-pp1800 {
}; };
/* EC turns on w/ pp1800_avdd_en_l */ /* EC turns on w/ pp1800_avdd_en_l */
pp1800_avdd: pp1800 { pp1800_avdd: regulator-pp1800 {
}; };
/* EC turns on w/ pp1800_lid_en_l */ /* EC turns on w/ pp1800_lid_en_l */
pp1800_lid: pp1800_mic: pp1800 { pp1800_lid: pp1800_mic: regulator-pp1800 {
}; };
/* EC turns on w/ lpddr_pwr_en */ /* EC turns on w/ lpddr_pwr_en */
pp1800_lpddr: pp1800 { pp1800_lpddr: regulator-pp1800 {
}; };
/* EC turns on w/ pp1800_pmu_en_l */ /* EC turns on w/ pp1800_pmu_en_l */
pp1800_pmu: pp1800 { pp1800_pmu: regulator-pp1800 {
}; };
/* EC turns on w/ pp1800_usb_en_l */ /* EC turns on w/ pp1800_usb_en_l */
pp1800_usb: pp1800 { pp1800_usb: regulator-pp1800 {
}; };
pp3000_sd_slot: regulator-pp3000-sd-slot { pp3000_sd_slot: regulator-pp3000-sd-slot {
@ -259,11 +259,11 @@
}; };
/* EC turns on w/ pp3300_trackpad_en_l */ /* EC turns on w/ pp3300_trackpad_en_l */
pp3300_trackpad: pp3300-trackpad { pp3300_trackpad: regulator-pp3300-trackpad {
}; };
/* EC turns on w/ usb_a_en */ /* EC turns on w/ usb_a_en */
pp5000_usb_a_vbus: pp5000 { pp5000_usb_a_vbus: regulator-pp5000 {
}; };
ap_rtc_clk: ap-rtc-clk { ap_rtc_clk: ap-rtc-clk {


@ -549,10 +549,10 @@
mmu600_pcie: iommu@fc900000 { mmu600_pcie: iommu@fc900000 {
compatible = "arm,smmu-v3"; compatible = "arm,smmu-v3";
reg = <0x0 0xfc900000 0x0 0x200000>; reg = <0x0 0xfc900000 0x0 0x200000>;
interrupts = <GIC_SPI 369 IRQ_TYPE_LEVEL_HIGH 0>, interrupts = <GIC_SPI 369 IRQ_TYPE_EDGE_RISING 0>,
<GIC_SPI 371 IRQ_TYPE_LEVEL_HIGH 0>, <GIC_SPI 371 IRQ_TYPE_EDGE_RISING 0>,
<GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH 0>, <GIC_SPI 374 IRQ_TYPE_EDGE_RISING 0>,
<GIC_SPI 367 IRQ_TYPE_LEVEL_HIGH 0>; <GIC_SPI 367 IRQ_TYPE_EDGE_RISING 0>;
interrupt-names = "eventq", "gerror", "priq", "cmdq-sync"; interrupt-names = "eventq", "gerror", "priq", "cmdq-sync";
#iommu-cells = <1>; #iommu-cells = <1>;
}; };
@ -560,10 +560,10 @@
mmu600_php: iommu@fcb00000 { mmu600_php: iommu@fcb00000 {
compatible = "arm,smmu-v3"; compatible = "arm,smmu-v3";
reg = <0x0 0xfcb00000 0x0 0x200000>; reg = <0x0 0xfcb00000 0x0 0x200000>;
interrupts = <GIC_SPI 381 IRQ_TYPE_LEVEL_HIGH 0>, interrupts = <GIC_SPI 381 IRQ_TYPE_EDGE_RISING 0>,
<GIC_SPI 383 IRQ_TYPE_LEVEL_HIGH 0>, <GIC_SPI 383 IRQ_TYPE_EDGE_RISING 0>,
<GIC_SPI 386 IRQ_TYPE_LEVEL_HIGH 0>, <GIC_SPI 386 IRQ_TYPE_EDGE_RISING 0>,
<GIC_SPI 379 IRQ_TYPE_LEVEL_HIGH 0>; <GIC_SPI 379 IRQ_TYPE_EDGE_RISING 0>;
interrupt-names = "eventq", "gerror", "priq", "cmdq-sync"; interrupt-names = "eventq", "gerror", "priq", "cmdq-sync";
#iommu-cells = <1>; #iommu-cells = <1>;
status = "disabled"; status = "disabled";
@ -2668,9 +2668,9 @@
rockchip,hw-tshut-temp = <120000>; rockchip,hw-tshut-temp = <120000>;
rockchip,hw-tshut-mode = <0>; /* tshut mode 0:CRU 1:GPIO */ rockchip,hw-tshut-mode = <0>; /* tshut mode 0:CRU 1:GPIO */
rockchip,hw-tshut-polarity = <0>; /* tshut polarity 0:LOW 1:HIGH */ rockchip,hw-tshut-polarity = <0>; /* tshut polarity 0:LOW 1:HIGH */
pinctrl-0 = <&tsadc_gpio_func>; pinctrl-0 = <&tsadc_shut_org>;
pinctrl-1 = <&tsadc_shut>; pinctrl-1 = <&tsadc_gpio_func>;
pinctrl-names = "gpio", "otpout"; pinctrl-names = "default", "sleep";
#thermal-sensor-cells = <1>; #thermal-sensor-cells = <1>;
status = "disabled"; status = "disabled";
}; };


@ -113,7 +113,7 @@
compatible = "regulator-fixed"; compatible = "regulator-fixed";
regulator-name = "vcc3v3_lcd"; regulator-name = "vcc3v3_lcd";
enable-active-high; enable-active-high;
gpio = <&gpio1 RK_PC4 GPIO_ACTIVE_HIGH>; gpio = <&gpio0 RK_PC4 GPIO_ACTIVE_HIGH>;
pinctrl-names = "default"; pinctrl-names = "default";
pinctrl-0 = <&lcdpwr_en>; pinctrl-0 = <&lcdpwr_en>;
vin-supply = <&vcc3v3_sys>; vin-supply = <&vcc3v3_sys>;
@ -241,7 +241,7 @@
&pinctrl { &pinctrl {
lcd { lcd {
lcdpwr_en: lcdpwr-en { lcdpwr_en: lcdpwr-en {
rockchip,pins = <1 RK_PC4 RK_FUNC_GPIO &pcfg_pull_down>; rockchip,pins = <0 RK_PC4 RK_FUNC_GPIO &pcfg_pull_down>;
}; };
bl_en: bl-en { bl_en: bl-en {

View file

@ -213,7 +213,6 @@
interrupt-names = "sys", "pmc", "msg", "legacy", "err", interrupt-names = "sys", "pmc", "msg", "legacy", "err",
"dma0", "dma1", "dma2", "dma3"; "dma0", "dma1", "dma2", "dma3";
max-link-speed = <3>; max-link-speed = <3>;
iommus = <&mmu600_pcie 0x0000>;
num-lanes = <4>; num-lanes = <4>;
phys = <&pcie30phy>; phys = <&pcie30phy>;
phy-names = "pcie-phy"; phy-names = "pcie-phy";

View file

@ -23,3 +23,7 @@
vpcie3v3-supply = <&vcc3v3_pcie30>; vpcie3v3-supply = <&vcc3v3_pcie30>;
status = "okay"; status = "okay";
}; };
&mmu600_pcie {
status = "disabled";
};

View file

@ -1551,6 +1551,8 @@ CONFIG_PWM_VISCONTI=m
CONFIG_SL28CPLD_INTC=y CONFIG_SL28CPLD_INTC=y
CONFIG_QCOM_PDC=y CONFIG_QCOM_PDC=y
CONFIG_QCOM_MPM=y CONFIG_QCOM_MPM=y
CONFIG_TI_SCI_INTR_IRQCHIP=y
CONFIG_TI_SCI_INTA_IRQCHIP=y
CONFIG_RESET_GPIO=m CONFIG_RESET_GPIO=m
CONFIG_RESET_IMX7=y CONFIG_RESET_IMX7=y
CONFIG_RESET_QCOM_AOSS=y CONFIG_RESET_QCOM_AOSS=y

View file

@ -605,48 +605,6 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
__cpacr_to_cptr_set(clr, set));\ __cpacr_to_cptr_set(clr, set));\
} while (0) } while (0)
static __always_inline void kvm_write_cptr_el2(u64 val)
{
if (has_vhe() || has_hvhe())
write_sysreg(val, cpacr_el1);
else
write_sysreg(val, cptr_el2);
}
/* Resets the value of cptr_el2 when returning to the host. */
static __always_inline void __kvm_reset_cptr_el2(struct kvm *kvm)
{
u64 val;
if (has_vhe()) {
val = (CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN);
if (cpus_have_final_cap(ARM64_SME))
val |= CPACR_EL1_SMEN_EL1EN;
} else if (has_hvhe()) {
val = CPACR_EL1_FPEN;
if (!kvm_has_sve(kvm) || !guest_owns_fp_regs())
val |= CPACR_EL1_ZEN;
if (cpus_have_final_cap(ARM64_SME))
val |= CPACR_EL1_SMEN;
} else {
val = CPTR_NVHE_EL2_RES1;
if (kvm_has_sve(kvm) && guest_owns_fp_regs())
val |= CPTR_EL2_TZ;
if (!cpus_have_final_cap(ARM64_SME))
val |= CPTR_EL2_TSM;
}
kvm_write_cptr_el2(val);
}
#ifdef __KVM_NVHE_HYPERVISOR__
#define kvm_reset_cptr_el2(v) __kvm_reset_cptr_el2(kern_hyp_va((v)->kvm))
#else
#define kvm_reset_cptr_el2(v) __kvm_reset_cptr_el2((v)->kvm)
#endif
/* /*
* Returns a 'sanitised' view of CPTR_EL2, translating from nVHE to the VHE * Returns a 'sanitised' view of CPTR_EL2, translating from nVHE to the VHE
* format if E2H isn't set. * format if E2H isn't set.

View file

@ -100,7 +100,7 @@ static inline void push_hyp_memcache(struct kvm_hyp_memcache *mc,
static inline void *pop_hyp_memcache(struct kvm_hyp_memcache *mc, static inline void *pop_hyp_memcache(struct kvm_hyp_memcache *mc,
void *(*to_va)(phys_addr_t phys)) void *(*to_va)(phys_addr_t phys))
{ {
phys_addr_t *p = to_va(mc->head); phys_addr_t *p = to_va(mc->head & PAGE_MASK);
if (!mc->nr_pages) if (!mc->nr_pages)
return NULL; return NULL;
@ -615,8 +615,6 @@ struct cpu_sve_state {
struct kvm_host_data { struct kvm_host_data {
#define KVM_HOST_DATA_FLAG_HAS_SPE 0 #define KVM_HOST_DATA_FLAG_HAS_SPE 0
#define KVM_HOST_DATA_FLAG_HAS_TRBE 1 #define KVM_HOST_DATA_FLAG_HAS_TRBE 1
#define KVM_HOST_DATA_FLAG_HOST_SVE_ENABLED 2
#define KVM_HOST_DATA_FLAG_HOST_SME_ENABLED 3
#define KVM_HOST_DATA_FLAG_TRBE_ENABLED 4 #define KVM_HOST_DATA_FLAG_TRBE_ENABLED 4
#define KVM_HOST_DATA_FLAG_EL1_TRACING_CONFIGURED 5 #define KVM_HOST_DATA_FLAG_EL1_TRACING_CONFIGURED 5
unsigned long flags; unsigned long flags;
@ -624,23 +622,13 @@ struct kvm_host_data {
struct kvm_cpu_context host_ctxt; struct kvm_cpu_context host_ctxt;
/* /*
* All pointers in this union are hyp VA. * Hyp VA.
* sve_state is only used in pKVM and if system_supports_sve(). * sve_state is only used in pKVM and if system_supports_sve().
*/ */
union { struct cpu_sve_state *sve_state;
struct user_fpsimd_state *fpsimd_state;
struct cpu_sve_state *sve_state;
};
union { /* Used by pKVM only. */
/* HYP VA pointer to the host storage for FPMR */ u64 fpmr;
u64 *fpmr_ptr;
/*
* Used by pKVM only, as it needs to provide storage
* for the host
*/
u64 fpmr;
};
/* Ownership of the FP regs */ /* Ownership of the FP regs */
enum { enum {

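A standalone sketch of the pattern behind the mc->head & PAGE_MASK change above, assuming the low, page-offset bits of head now carry metadata and must be masked off before the value is turned back into a pointer. The struct, helper names and 4 KiB page size below are illustrative, not the kernel's:

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define SKETCH_PAGE_SIZE 4096UL
#define SKETCH_PAGE_MASK (~(SKETCH_PAGE_SIZE - 1))

struct memcache { uintptr_t head; };    /* page address | metadata in low bits */

static void push(struct memcache *mc, void *page, unsigned order)
{
        /* The page is page-aligned, so its low bits are free for metadata. */
        mc->head = (uintptr_t)page | order;
}

static void *pop(struct memcache *mc, unsigned *order)
{
        *order = mc->head & ~SKETCH_PAGE_MASK;          /* metadata bits */
        return (void *)(mc->head & SKETCH_PAGE_MASK);   /* real address  */
}

int main(void)
{
        struct memcache mc;
        void *page = aligned_alloc(SKETCH_PAGE_SIZE, SKETCH_PAGE_SIZE);
        unsigned order;

        assert(page);
        push(&mc, page, 3);
        assert(pop(&mc, &order) == page && order == 3);
        free(page);
        return 0;
}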
View file

@ -101,16 +101,18 @@ int populate_cache_leaves(unsigned int cpu)
unsigned int level, idx; unsigned int level, idx;
enum cache_type type; enum cache_type type;
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
struct cacheinfo *this_leaf = this_cpu_ci->info_list; struct cacheinfo *infos = this_cpu_ci->info_list;
for (idx = 0, level = 1; level <= this_cpu_ci->num_levels && for (idx = 0, level = 1; level <= this_cpu_ci->num_levels &&
idx < this_cpu_ci->num_leaves; idx++, level++) { idx < this_cpu_ci->num_leaves; level++) {
type = get_cache_type(level); type = get_cache_type(level);
if (type == CACHE_TYPE_SEPARATE) { if (type == CACHE_TYPE_SEPARATE) {
ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level); if (idx + 1 >= this_cpu_ci->num_leaves)
ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level); break;
ci_leaf_init(&infos[idx++], CACHE_TYPE_DATA, level);
ci_leaf_init(&infos[idx++], CACHE_TYPE_INST, level);
} else { } else {
ci_leaf_init(this_leaf++, type, level); ci_leaf_init(&infos[idx++], type, level);
} }
} }
return 0; return 0;
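A standalone C sketch of the indexed fill pattern that the populate_cache_leaves() hunk above switches to: a "separate" cache level emits two leaves, so the loop must confirm that both entries still fit before writing them. Types and names below are illustrative, not the kernel's:

#include <stdio.h>

enum cache_type { UNIFIED, SEPARATE };

struct leaf { int level; const char *what; };

static int fill_leaves(struct leaf *leaves, int num_leaves,
                       const enum cache_type *levels, int num_levels)
{
        int idx = 0;

        for (int level = 0; level < num_levels && idx < num_leaves; level++) {
                if (levels[level] == SEPARATE) {
                        if (idx + 1 >= num_leaves)
                                break;          /* no room for both entries */
                        leaves[idx++] = (struct leaf){ level + 1, "data" };
                        leaves[idx++] = (struct leaf){ level + 1, "inst" };
                } else {
                        leaves[idx++] = (struct leaf){ level + 1, "unified" };
                }
        }
        return idx;
}

int main(void)
{
        enum cache_type levels[] = { SEPARATE, UNIFIED, UNIFIED };
        struct leaf leaves[4];
        int n = fill_leaves(leaves, 4, levels, 3);

        for (int i = 0; i < n; i++)
                printf("L%d %s\n", leaves[i].level, leaves[i].what);
        return 0;
}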

View file

@ -3091,6 +3091,7 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(ID_AA64ISAR0_EL1, TS, FLAGM, CAP_HWCAP, KERNEL_HWCAP_FLAGM), HWCAP_CAP(ID_AA64ISAR0_EL1, TS, FLAGM, CAP_HWCAP, KERNEL_HWCAP_FLAGM),
HWCAP_CAP(ID_AA64ISAR0_EL1, TS, FLAGM2, CAP_HWCAP, KERNEL_HWCAP_FLAGM2), HWCAP_CAP(ID_AA64ISAR0_EL1, TS, FLAGM2, CAP_HWCAP, KERNEL_HWCAP_FLAGM2),
HWCAP_CAP(ID_AA64ISAR0_EL1, RNDR, IMP, CAP_HWCAP, KERNEL_HWCAP_RNG), HWCAP_CAP(ID_AA64ISAR0_EL1, RNDR, IMP, CAP_HWCAP, KERNEL_HWCAP_RNG),
HWCAP_CAP(ID_AA64ISAR3_EL1, FPRCVT, IMP, CAP_HWCAP, KERNEL_HWCAP_FPRCVT),
HWCAP_CAP(ID_AA64PFR0_EL1, FP, IMP, CAP_HWCAP, KERNEL_HWCAP_FP), HWCAP_CAP(ID_AA64PFR0_EL1, FP, IMP, CAP_HWCAP, KERNEL_HWCAP_FP),
HWCAP_CAP(ID_AA64PFR0_EL1, FP, FP16, CAP_HWCAP, KERNEL_HWCAP_FPHP), HWCAP_CAP(ID_AA64PFR0_EL1, FP, FP16, CAP_HWCAP, KERNEL_HWCAP_FPHP),
HWCAP_CAP(ID_AA64PFR0_EL1, AdvSIMD, IMP, CAP_HWCAP, KERNEL_HWCAP_ASIMD), HWCAP_CAP(ID_AA64PFR0_EL1, AdvSIMD, IMP, CAP_HWCAP, KERNEL_HWCAP_ASIMD),
@ -3180,8 +3181,6 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(ID_AA64SMFR0_EL1, SF8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8FMA), HWCAP_CAP(ID_AA64SMFR0_EL1, SF8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8FMA),
HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP4), HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP4),
HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP2), HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP2),
HWCAP_CAP(ID_AA64SMFR0_EL1, SF8MM8, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8MM8),
HWCAP_CAP(ID_AA64SMFR0_EL1, SF8MM4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8MM4),
HWCAP_CAP(ID_AA64SMFR0_EL1, SBitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SBITPERM), HWCAP_CAP(ID_AA64SMFR0_EL1, SBitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SBITPERM),
HWCAP_CAP(ID_AA64SMFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_AES), HWCAP_CAP(ID_AA64SMFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_AES),
HWCAP_CAP(ID_AA64SMFR0_EL1, SFEXPA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SFEXPA), HWCAP_CAP(ID_AA64SMFR0_EL1, SFEXPA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SFEXPA),
@ -3192,6 +3191,8 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(ID_AA64FPFR0_EL1, F8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_F8FMA), HWCAP_CAP(ID_AA64FPFR0_EL1, F8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_F8FMA),
HWCAP_CAP(ID_AA64FPFR0_EL1, F8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_F8DP4), HWCAP_CAP(ID_AA64FPFR0_EL1, F8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_F8DP4),
HWCAP_CAP(ID_AA64FPFR0_EL1, F8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_F8DP2), HWCAP_CAP(ID_AA64FPFR0_EL1, F8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_F8DP2),
HWCAP_CAP(ID_AA64FPFR0_EL1, F8MM8, IMP, CAP_HWCAP, KERNEL_HWCAP_F8MM8),
HWCAP_CAP(ID_AA64FPFR0_EL1, F8MM4, IMP, CAP_HWCAP, KERNEL_HWCAP_F8MM4),
HWCAP_CAP(ID_AA64FPFR0_EL1, F8E4M3, IMP, CAP_HWCAP, KERNEL_HWCAP_F8E4M3), HWCAP_CAP(ID_AA64FPFR0_EL1, F8E4M3, IMP, CAP_HWCAP, KERNEL_HWCAP_F8E4M3),
HWCAP_CAP(ID_AA64FPFR0_EL1, F8E5M2, IMP, CAP_HWCAP, KERNEL_HWCAP_F8E5M2), HWCAP_CAP(ID_AA64FPFR0_EL1, F8E5M2, IMP, CAP_HWCAP, KERNEL_HWCAP_F8E5M2),
#ifdef CONFIG_ARM64_POE #ifdef CONFIG_ARM64_POE

View file

@ -1694,31 +1694,6 @@ void fpsimd_signal_preserve_current_state(void)
sve_to_fpsimd(current); sve_to_fpsimd(current);
} }
/*
* Called by KVM when entering the guest.
*/
void fpsimd_kvm_prepare(void)
{
if (!system_supports_sve())
return;
/*
* KVM does not save host SVE state since we can only enter
* the guest from a syscall so the ABI means that only the
* non-saved SVE state needs to be saved. If we have left
* SVE enabled for performance reasons then update the task
* state to be FPSIMD only.
*/
get_cpu_fpsimd_context();
if (test_and_clear_thread_flag(TIF_SVE)) {
sve_to_fpsimd(current);
current->thread.fp_type = FP_STATE_FPSIMD;
}
put_cpu_fpsimd_context();
}
/* /*
* Associate current's FPSIMD context with this cpu * Associate current's FPSIMD context with this cpu
* The caller must have ownership of the cpu FPSIMD context before calling * The caller must have ownership of the cpu FPSIMD context before calling

View file

@ -194,12 +194,19 @@ static void amu_fie_setup(const struct cpumask *cpus)
int cpu; int cpu;
/* We are already set since the last insmod of cpufreq driver */ /* We are already set since the last insmod of cpufreq driver */
if (unlikely(cpumask_subset(cpus, amu_fie_cpus))) if (cpumask_available(amu_fie_cpus) &&
unlikely(cpumask_subset(cpus, amu_fie_cpus)))
return; return;
for_each_cpu(cpu, cpus) { for_each_cpu(cpu, cpus)
if (!freq_counters_valid(cpu)) if (!freq_counters_valid(cpu))
return; return;
if (!cpumask_available(amu_fie_cpus) &&
!zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL)) {
WARN_ONCE(1, "Failed to allocate FIE cpumask for CPUs[%*pbl]\n",
cpumask_pr_args(cpus));
return;
} }
cpumask_or(amu_fie_cpus, amu_fie_cpus, cpus); cpumask_or(amu_fie_cpus, amu_fie_cpus, cpus);
@ -237,17 +244,8 @@ static struct notifier_block init_amu_fie_notifier = {
static int __init init_amu_fie(void) static int __init init_amu_fie(void)
{ {
int ret; return cpufreq_register_notifier(&init_amu_fie_notifier,
if (!zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL))
return -ENOMEM;
ret = cpufreq_register_notifier(&init_amu_fie_notifier,
CPUFREQ_POLICY_NOTIFIER); CPUFREQ_POLICY_NOTIFIER);
if (ret)
free_cpumask_var(amu_fie_cpus);
return ret;
} }
core_initcall(init_amu_fie); core_initcall(init_amu_fie);
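A standalone sketch of the allocate-on-first-valid-use pattern the amu_fie_setup()/init_amu_fie() hunks above adopt: the tracking mask is only allocated once a CPU group actually passes validation, so the init path no longer needs to pre-allocate it and unwind on failure. Everything below is illustrative userspace code, not the kernel's:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 8

static bool *tracked;   /* lazily allocated, like amu_fie_cpus */

static bool cpu_is_valid(int cpu) { return cpu != 3; }  /* arbitrary rule */

static void setup(const int *cpus, int n)
{
        for (int i = 0; i < n; i++)
                if (!cpu_is_valid(cpus[i]))
                        return;                 /* reject the whole group */

        if (!tracked) {
                tracked = calloc(NR_CPUS, sizeof(*tracked));
                if (!tracked) {
                        fprintf(stderr, "failed to allocate tracking set\n");
                        return;
                }
        }
        for (int i = 0; i < n; i++)
                tracked[cpus[i]] = true;
}

int main(void)
{
        int good[] = {0, 1}, bad[] = {2, 3};

        setup(bad, 2);          /* rejected, nothing allocated           */
        printf("after bad group: %s\n", tracked ? "allocated" : "not allocated");
        setup(good, 2);         /* first valid group triggers allocation */
        printf("after good group: %s\n", tracked ? "allocated" : "not allocated");
        free(tracked);
        return 0;
}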

View file

@ -41,6 +41,7 @@ SECTIONS
*/ */
/DISCARD/ : { /DISCARD/ : {
*(.note.GNU-stack .note.gnu.property) *(.note.GNU-stack .note.gnu.property)
*(.ARM.attributes)
} }
.note : { *(.note.*) } :text :note .note : { *(.note.*) } :text :note

View file

@ -162,6 +162,7 @@ SECTIONS
/DISCARD/ : { /DISCARD/ : {
*(.interp .dynamic) *(.interp .dynamic)
*(.dynsym .dynstr .hash .gnu.hash) *(.dynsym .dynstr .hash .gnu.hash)
*(.ARM.attributes)
} }
. = KIMAGE_VADDR; . = KIMAGE_VADDR;

View file

@ -447,21 +447,19 @@ static void kvm_timer_update_status(struct arch_timer_context *ctx, bool level)
static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level, static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
struct arch_timer_context *timer_ctx) struct arch_timer_context *timer_ctx)
{ {
int ret;
kvm_timer_update_status(timer_ctx, new_level); kvm_timer_update_status(timer_ctx, new_level);
timer_ctx->irq.level = new_level; timer_ctx->irq.level = new_level;
trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_irq(timer_ctx), trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_irq(timer_ctx),
timer_ctx->irq.level); timer_ctx->irq.level);
if (!userspace_irqchip(vcpu->kvm)) { if (userspace_irqchip(vcpu->kvm))
ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu, return;
timer_irq(timer_ctx),
timer_ctx->irq.level, kvm_vgic_inject_irq(vcpu->kvm, vcpu,
timer_ctx); timer_irq(timer_ctx),
WARN_ON(ret); timer_ctx->irq.level,
} timer_ctx);
} }
/* Only called for a fully emulated timer */ /* Only called for a fully emulated timer */
@ -471,10 +469,8 @@ static void timer_emulate(struct arch_timer_context *ctx)
trace_kvm_timer_emulate(ctx, should_fire); trace_kvm_timer_emulate(ctx, should_fire);
if (should_fire != ctx->irq.level) { if (should_fire != ctx->irq.level)
kvm_timer_update_irq(ctx->vcpu, should_fire, ctx); kvm_timer_update_irq(ctx->vcpu, should_fire, ctx);
return;
}
kvm_timer_update_status(ctx, should_fire); kvm_timer_update_status(ctx, should_fire);
@ -761,21 +757,6 @@ static void kvm_timer_vcpu_load_nested_switch(struct kvm_vcpu *vcpu,
timer_irq(map->direct_ptimer), timer_irq(map->direct_ptimer),
&arch_timer_irq_ops); &arch_timer_irq_ops);
WARN_ON_ONCE(ret); WARN_ON_ONCE(ret);
/*
* The virtual offset behaviour is "interesting", as it
* always applies when HCR_EL2.E2H==0, but only when
* accessed from EL1 when HCR_EL2.E2H==1. So make sure we
* track E2H when putting the HV timer in "direct" mode.
*/
if (map->direct_vtimer == vcpu_hvtimer(vcpu)) {
struct arch_timer_offset *offs = &map->direct_vtimer->offset;
if (vcpu_el2_e2h_is_set(vcpu))
offs->vcpu_offset = NULL;
else
offs->vcpu_offset = &__vcpu_sys_reg(vcpu, CNTVOFF_EL2);
}
} }
} }
@ -976,31 +957,21 @@ void kvm_timer_sync_nested(struct kvm_vcpu *vcpu)
* which allows trapping of the timer registers even with NV2. * which allows trapping of the timer registers even with NV2.
* Still, this is still worse than FEAT_NV on its own. Meh. * Still, this is still worse than FEAT_NV on its own. Meh.
*/ */
if (!vcpu_el2_e2h_is_set(vcpu)) { if (!cpus_have_final_cap(ARM64_HAS_ECV)) {
if (cpus_have_final_cap(ARM64_HAS_ECV))
return;
/*
* A non-VHE guest hypervisor doesn't have any direct access
* to its timers: the EL2 registers trap (and the HW is
* fully emulated), while the EL0 registers access memory
* despite the access being notionally direct. Boo.
*
* We update the hardware timer registers with the
* latest value written by the guest to the VNCR page
* and let the hardware take care of the rest.
*/
write_sysreg_el0(__vcpu_sys_reg(vcpu, CNTV_CTL_EL0), SYS_CNTV_CTL);
write_sysreg_el0(__vcpu_sys_reg(vcpu, CNTV_CVAL_EL0), SYS_CNTV_CVAL);
write_sysreg_el0(__vcpu_sys_reg(vcpu, CNTP_CTL_EL0), SYS_CNTP_CTL);
write_sysreg_el0(__vcpu_sys_reg(vcpu, CNTP_CVAL_EL0), SYS_CNTP_CVAL);
} else {
/* /*
* For a VHE guest hypervisor, the EL2 state is directly * For a VHE guest hypervisor, the EL2 state is directly
* stored in the host EL1 timers, while the emulated EL0 * stored in the host EL1 timers, while the emulated EL1
* state is stored in the VNCR page. The latter could have * state is stored in the VNCR page. The latter could have
* been updated behind our back, and we must reset the * been updated behind our back, and we must reset the
* emulation of the timers. * emulation of the timers.
*
* A non-VHE guest hypervisor doesn't have any direct access
* to its timers: the EL2 registers trap despite being
* notionally direct (we use the EL1 HW, as for VHE), while
* the EL1 registers access memory.
*
* In both cases, process the emulated timers on each guest
* exit. Boo.
*/ */
struct timer_map map; struct timer_map map;
get_timer_map(vcpu, &map); get_timer_map(vcpu, &map);

View file

@ -2290,6 +2290,19 @@ static int __init init_subsystems(void)
break; break;
case -ENODEV: case -ENODEV:
case -ENXIO: case -ENXIO:
/*
* No VGIC? No pKVM for you.
*
* Protected mode assumes that VGICv3 is present, so no point
* in trying to hobble along if vgic initialization fails.
*/
if (is_protected_kvm_enabled())
goto out;
/*
* Otherwise, userspace could choose to implement a GIC for its
* guest on non-cooperative hardware.
*/
vgic_present = false; vgic_present = false;
err = 0; err = 0;
break; break;
@ -2400,6 +2413,13 @@ static void kvm_hyp_init_symbols(void)
kvm_nvhe_sym(id_aa64smfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64SMFR0_EL1); kvm_nvhe_sym(id_aa64smfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64SMFR0_EL1);
kvm_nvhe_sym(__icache_flags) = __icache_flags; kvm_nvhe_sym(__icache_flags) = __icache_flags;
kvm_nvhe_sym(kvm_arm_vmid_bits) = kvm_arm_vmid_bits; kvm_nvhe_sym(kvm_arm_vmid_bits) = kvm_arm_vmid_bits;
/*
* Flush entire BSS since part of its data containing init symbols is read
* while the MMU is off.
*/
kvm_flush_dcache_to_poc(kvm_ksym_ref(__hyp_bss_start),
kvm_ksym_ref(__hyp_bss_end) - kvm_ksym_ref(__hyp_bss_start));
} }
static int __init kvm_hyp_init_protection(u32 hyp_va_bits) static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
@ -2461,14 +2481,6 @@ static void finalize_init_hyp_mode(void)
per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state =
kern_hyp_va(sve_state); kern_hyp_va(sve_state);
} }
} else {
for_each_possible_cpu(cpu) {
struct user_fpsimd_state *fpsimd_state;
fpsimd_state = &per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->host_ctxt.fp_regs;
per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->fpsimd_state =
kern_hyp_va(fpsimd_state);
}
} }
} }

View file

@ -54,50 +54,18 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
if (!system_supports_fpsimd()) if (!system_supports_fpsimd())
return; return;
fpsimd_kvm_prepare();
/* /*
* We will check TIF_FOREIGN_FPSTATE just before entering the * Ensure that any host FPSIMD/SVE/SME state is saved and unbound such
* guest in kvm_arch_vcpu_ctxflush_fp() and override this to * that the host kernel is responsible for restoring this state upon
* FP_STATE_FREE if the flag set. * return to userspace, and the hyp code doesn't need to save anything.
*
* When the host may use SME, fpsimd_save_and_flush_cpu_state() ensures
* that PSTATE.{SM,ZA} == {0,0}.
*/ */
*host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED; fpsimd_save_and_flush_cpu_state();
*host_data_ptr(fpsimd_state) = kern_hyp_va(&current->thread.uw.fpsimd_state); *host_data_ptr(fp_owner) = FP_STATE_FREE;
*host_data_ptr(fpmr_ptr) = kern_hyp_va(&current->thread.uw.fpmr);
host_data_clear_flag(HOST_SVE_ENABLED); WARN_ON_ONCE(system_supports_sme() && read_sysreg_s(SYS_SVCR));
if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
host_data_set_flag(HOST_SVE_ENABLED);
if (system_supports_sme()) {
host_data_clear_flag(HOST_SME_ENABLED);
if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
host_data_set_flag(HOST_SME_ENABLED);
/*
* If PSTATE.SM is enabled then save any pending FP
* state and disable PSTATE.SM. If we leave PSTATE.SM
* enabled and the guest does not enable SME via
* CPACR_EL1.SMEN then operations that should be valid
* may generate SME traps from EL1 to EL1 which we
* can't intercept and which would confuse the guest.
*
* Do the same for PSTATE.ZA in the case where there
* is state in the registers which has not already
* been saved, this is very unlikely to happen.
*/
if (read_sysreg_s(SYS_SVCR) & (SVCR_SM_MASK | SVCR_ZA_MASK)) {
*host_data_ptr(fp_owner) = FP_STATE_FREE;
fpsimd_save_and_flush_cpu_state();
}
}
/*
* If normal guests gain SME support, maintain this behavior for pKVM
* guests, which don't support SME.
*/
WARN_ON(is_protected_kvm_enabled() && system_supports_sme() &&
read_sysreg_s(SYS_SVCR));
} }
/* /*
@ -162,52 +130,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
local_irq_save(flags); local_irq_save(flags);
/*
* If we have VHE then the Hyp code will reset CPACR_EL1 to
* the default value and we need to reenable SME.
*/
if (has_vhe() && system_supports_sme()) {
/* Also restore EL0 state seen on entry */
if (host_data_test_flag(HOST_SME_ENABLED))
sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_SMEN);
else
sysreg_clear_set(CPACR_EL1,
CPACR_EL1_SMEN_EL0EN,
CPACR_EL1_SMEN_EL1EN);
isb();
}
if (guest_owns_fp_regs()) { if (guest_owns_fp_regs()) {
if (vcpu_has_sve(vcpu)) {
u64 zcr = read_sysreg_el1(SYS_ZCR);
/*
* If the vCPU is in the hyp context then ZCR_EL1 is
* loaded with its vEL2 counterpart.
*/
__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)) = zcr;
/*
* Restore the VL that was saved when bound to the CPU,
* which is the maximum VL for the guest. Because the
* layout of the data when saving the sve state depends
* on the VL, we need to use a consistent (i.e., the
* maximum) VL.
* Note that this means that at guest exit ZCR_EL1 is
* not necessarily the same as on guest entry.
*
* ZCR_EL2 holds the guest hypervisor's VL when running
* a nested guest, which could be smaller than the
* max for the vCPU. Similar to above, we first need to
* switch to a VL consistent with the layout of the
* vCPU's SVE state. KVM support for NV implies VHE, so
* using the ZCR_EL1 alias is safe.
*/
if (!has_vhe() || (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)))
sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1,
SYS_ZCR_EL1);
}
/* /*
* Flush (save and invalidate) the fpsimd/sve state so that if * Flush (save and invalidate) the fpsimd/sve state so that if
* the host tries to use fpsimd/sve, it's not using stale data * the host tries to use fpsimd/sve, it's not using stale data
@ -219,18 +142,6 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
* when needed. * when needed.
*/ */
fpsimd_save_and_flush_cpu_state(); fpsimd_save_and_flush_cpu_state();
} else if (has_vhe() && system_supports_sve()) {
/*
* The FPSIMD/SVE state in the CPU has not been touched, and we
* have SVE (and VHE): CPACR_EL1 (alias CPTR_EL2) has been
* reset by kvm_reset_cptr_el2() in the Hyp code, disabling SVE
* for EL0. To avoid spurious traps, restore the trap state
* seen by kvm_arch_vcpu_load_fp():
*/
if (host_data_test_flag(HOST_SVE_ENABLED))
sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN);
else
sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0);
} }
local_irq_restore(flags); local_irq_restore(flags);

View file

@ -44,6 +44,11 @@ alternative_if ARM64_HAS_RAS_EXTN
alternative_else_nop_endif alternative_else_nop_endif
mrs x1, isr_el1 mrs x1, isr_el1
cbz x1, 1f cbz x1, 1f
// Ensure that __guest_enter() always provides a context
// synchronization event so that callers don't need ISBs for anything
// that would usually be synchronized by the ERET.
isb
mov x0, #ARM_EXCEPTION_IRQ mov x0, #ARM_EXCEPTION_IRQ
ret ret

View file

@ -326,7 +326,7 @@ static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
return __get_fault_info(vcpu->arch.fault.esr_el2, &vcpu->arch.fault); return __get_fault_info(vcpu->arch.fault.esr_el2, &vcpu->arch.fault);
} }
static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code) static inline bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
{ {
*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR); *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
arm64_mops_reset_regs(vcpu_gp_regs(vcpu), vcpu->arch.fault.esr_el2); arm64_mops_reset_regs(vcpu_gp_regs(vcpu), vcpu->arch.fault.esr_el2);
@ -375,7 +375,87 @@ static inline void __hyp_sve_save_host(void)
true); true);
} }
static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu); static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu)
{
u64 zcr_el1, zcr_el2;
if (!guest_owns_fp_regs())
return;
if (vcpu_has_sve(vcpu)) {
/* A guest hypervisor may restrict the effective max VL. */
if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))
zcr_el2 = __vcpu_sys_reg(vcpu, ZCR_EL2);
else
zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
write_sysreg_el2(zcr_el2, SYS_ZCR);
zcr_el1 = __vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu));
write_sysreg_el1(zcr_el1, SYS_ZCR);
}
}
static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu)
{
u64 zcr_el1, zcr_el2;
if (!guest_owns_fp_regs())
return;
/*
* When the guest owns the FP regs, we know that guest+hyp traps for
* any FPSIMD/SVE/SME features exposed to the guest have been disabled
* by either fpsimd_lazy_switch_to_guest() or kvm_hyp_handle_fpsimd()
* prior to __guest_enter(). As __guest_enter() guarantees a context
* synchronization event, we don't need an ISB here to avoid taking
* traps for anything that was exposed to the guest.
*/
if (vcpu_has_sve(vcpu)) {
zcr_el1 = read_sysreg_el1(SYS_ZCR);
__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)) = zcr_el1;
/*
* The guest's state is always saved using the guest's max VL.
* Ensure that the host has the guest's max VL active such that
* the host can save the guest's state lazily, but don't
* artificially restrict the host to the guest's max VL.
*/
if (has_vhe()) {
zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
write_sysreg_el2(zcr_el2, SYS_ZCR);
} else {
zcr_el2 = sve_vq_from_vl(kvm_host_sve_max_vl) - 1;
write_sysreg_el2(zcr_el2, SYS_ZCR);
zcr_el1 = vcpu_sve_max_vq(vcpu) - 1;
write_sysreg_el1(zcr_el1, SYS_ZCR);
}
}
}
static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
{
/*
* Non-protected kvm relies on the host restoring its sve state.
* Protected kvm restores the host's sve state so as not to reveal that
* fpsimd was used by a guest nor leak upper sve bits.
*/
if (system_supports_sve()) {
__hyp_sve_save_host();
/* Re-enable SVE traps if not supported for the guest vcpu. */
if (!vcpu_has_sve(vcpu))
cpacr_clear_set(CPACR_EL1_ZEN, 0);
} else {
__fpsimd_save_state(host_data_ptr(host_ctxt.fp_regs));
}
if (kvm_has_fpmr(kern_hyp_va(vcpu->kvm)))
*host_data_ptr(fpmr) = read_sysreg_s(SYS_FPMR);
}
/* /*
* We trap the first access to the FP/SIMD to save the host context and * We trap the first access to the FP/SIMD to save the host context and
@ -383,7 +463,7 @@ static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
* If FP/SIMD is not implemented, handle the trap and inject an undefined * If FP/SIMD is not implemented, handle the trap and inject an undefined
* instruction exception to the guest. Similarly for trapped SVE accesses. * instruction exception to the guest. Similarly for trapped SVE accesses.
*/ */
static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
{ {
bool sve_guest; bool sve_guest;
u8 esr_ec; u8 esr_ec;
@ -425,7 +505,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
isb(); isb();
/* Write out the host state if it's in the registers */ /* Write out the host state if it's in the registers */
if (host_owns_fp_regs()) if (is_protected_kvm_enabled() && host_owns_fp_regs())
kvm_hyp_save_fpsimd_host(vcpu); kvm_hyp_save_fpsimd_host(vcpu);
/* Restore the guest state */ /* Restore the guest state */
@ -501,9 +581,22 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu)
return true; return true;
} }
/* Open-coded version of timer_get_offset() to allow for kern_hyp_va() */
static inline u64 hyp_timer_get_offset(struct arch_timer_context *ctxt)
{
u64 offset = 0;
if (ctxt->offset.vm_offset)
offset += *kern_hyp_va(ctxt->offset.vm_offset);
if (ctxt->offset.vcpu_offset)
offset += *kern_hyp_va(ctxt->offset.vcpu_offset);
return offset;
}
static inline u64 compute_counter_value(struct arch_timer_context *ctxt) static inline u64 compute_counter_value(struct arch_timer_context *ctxt)
{ {
return arch_timer_read_cntpct_el0() - timer_get_offset(ctxt); return arch_timer_read_cntpct_el0() - hyp_timer_get_offset(ctxt);
} }
static bool kvm_handle_cntxct(struct kvm_vcpu *vcpu) static bool kvm_handle_cntxct(struct kvm_vcpu *vcpu)
@ -587,7 +680,7 @@ static bool handle_ampere1_tcr(struct kvm_vcpu *vcpu)
return true; return true;
} }
static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code) static inline bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
{ {
if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) && if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
handle_tx2_tvm(vcpu)) handle_tx2_tvm(vcpu))
@ -607,7 +700,7 @@ static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
return false; return false;
} }
static bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code) static inline bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code)
{ {
if (static_branch_unlikely(&vgic_v3_cpuif_trap) && if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
__vgic_v3_perform_cpuif_access(vcpu) == 1) __vgic_v3_perform_cpuif_access(vcpu) == 1)
@ -616,19 +709,18 @@ static bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code)
return false; return false;
} }
static bool kvm_hyp_handle_memory_fault(struct kvm_vcpu *vcpu, u64 *exit_code) static inline bool kvm_hyp_handle_memory_fault(struct kvm_vcpu *vcpu,
u64 *exit_code)
{ {
if (!__populate_fault_info(vcpu)) if (!__populate_fault_info(vcpu))
return true; return true;
return false; return false;
} }
static bool kvm_hyp_handle_iabt_low(struct kvm_vcpu *vcpu, u64 *exit_code) #define kvm_hyp_handle_iabt_low kvm_hyp_handle_memory_fault
__alias(kvm_hyp_handle_memory_fault); #define kvm_hyp_handle_watchpt_low kvm_hyp_handle_memory_fault
static bool kvm_hyp_handle_watchpt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
__alias(kvm_hyp_handle_memory_fault);
static bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code) static inline bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
{ {
if (kvm_hyp_handle_memory_fault(vcpu, exit_code)) if (kvm_hyp_handle_memory_fault(vcpu, exit_code))
return true; return true;
@ -658,23 +750,16 @@ static bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
typedef bool (*exit_handler_fn)(struct kvm_vcpu *, u64 *); typedef bool (*exit_handler_fn)(struct kvm_vcpu *, u64 *);
static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu);
static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code);
/* /*
* Allow the hypervisor to handle the exit with an exit handler if it has one. * Allow the hypervisor to handle the exit with an exit handler if it has one.
* *
* Returns true if the hypervisor handled the exit, and control should go back * Returns true if the hypervisor handled the exit, and control should go back
* to the guest, or false if it hasn't. * to the guest, or false if it hasn't.
*/ */
static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code) static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code,
const exit_handler_fn *handlers)
{ {
const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu); exit_handler_fn fn = handlers[kvm_vcpu_trap_get_class(vcpu)];
exit_handler_fn fn;
fn = handlers[kvm_vcpu_trap_get_class(vcpu)];
if (fn) if (fn)
return fn(vcpu, exit_code); return fn(vcpu, exit_code);
@ -704,20 +789,9 @@ static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code
* the guest, false when we should restore the host state and return to the * the guest, false when we should restore the host state and return to the
* main run loop. * main run loop.
*/ */
static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) static inline bool __fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code,
const exit_handler_fn *handlers)
{ {
/*
* Save PSTATE early so that we can evaluate the vcpu mode
* early on.
*/
synchronize_vcpu_pstate(vcpu, exit_code);
/*
* Check whether we want to repaint the state one way or
* another.
*/
early_exit_filter(vcpu, exit_code);
if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
@ -747,7 +821,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
goto exit; goto exit;
/* Check if there's an exit handler and allow it to handle the exit. */ /* Check if there's an exit handler and allow it to handle the exit. */
if (kvm_hyp_handle_exit(vcpu, exit_code)) if (kvm_hyp_handle_exit(vcpu, exit_code, handlers))
goto guest; goto guest;
exit: exit:
/* Return to the host kernel and handle the exit */ /* Return to the host kernel and handle the exit */
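A minimal model of the dispatch refactor above, where the shared exit fixup now receives its handler table as a parameter instead of calling back into a per-mode lookup. Exit classes and handlers below are made up for illustration:

#include <stdbool.h>
#include <stdio.h>

enum { EC_SYSREG, EC_FPSIMD, EC_MAX };

typedef bool (*exit_handler_fn)(int *exit_code);

static bool handle_sysreg(int *exit_code)
{
        (void)exit_code;
        printf("sysreg exit handled\n");
        return true;
}

/* Each mode (think VHE vs nVHE) supplies its own table ... */
static const exit_handler_fn vhe_handlers[EC_MAX] = {
        [EC_SYSREG] = handle_sysreg,
        /* EC_FPSIMD intentionally left NULL: not handled by this table */
};

/* ... and the shared fixup just indexes whatever table it was given. */
static bool fixup_exit(int ec, int *exit_code, const exit_handler_fn *handlers)
{
        exit_handler_fn fn = handlers[ec];

        return fn ? fn(exit_code) : false;
}

int main(void)
{
        int code = 0;

        printf("sysreg handled: %d\n", fixup_exit(EC_SYSREG, &code, vhe_handlers));
        printf("fpsimd handled: %d\n", fixup_exit(EC_FPSIMD, &code, vhe_handlers));
        return 0;
}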

View file

@ -5,6 +5,7 @@
*/ */
#include <hyp/adjust_pc.h> #include <hyp/adjust_pc.h>
#include <hyp/switch.h>
#include <asm/pgtable-types.h> #include <asm/pgtable-types.h>
#include <asm/kvm_asm.h> #include <asm/kvm_asm.h>
@ -83,7 +84,7 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
if (system_supports_sve()) if (system_supports_sve())
__hyp_sve_restore_host(); __hyp_sve_restore_host();
else else
__fpsimd_restore_state(*host_data_ptr(fpsimd_state)); __fpsimd_restore_state(host_data_ptr(host_ctxt.fp_regs));
if (has_fpmr) if (has_fpmr)
write_sysreg_s(*host_data_ptr(fpmr), SYS_FPMR); write_sysreg_s(*host_data_ptr(fpmr), SYS_FPMR);
@ -91,11 +92,34 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
*host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED; *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
} }
static void flush_debug_state(struct pkvm_hyp_vcpu *hyp_vcpu)
{
struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
hyp_vcpu->vcpu.arch.debug_owner = host_vcpu->arch.debug_owner;
if (kvm_guest_owns_debug_regs(&hyp_vcpu->vcpu))
hyp_vcpu->vcpu.arch.vcpu_debug_state = host_vcpu->arch.vcpu_debug_state;
else if (kvm_host_owns_debug_regs(&hyp_vcpu->vcpu))
hyp_vcpu->vcpu.arch.external_debug_state = host_vcpu->arch.external_debug_state;
}
static void sync_debug_state(struct pkvm_hyp_vcpu *hyp_vcpu)
{
struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
if (kvm_guest_owns_debug_regs(&hyp_vcpu->vcpu))
host_vcpu->arch.vcpu_debug_state = hyp_vcpu->vcpu.arch.vcpu_debug_state;
else if (kvm_host_owns_debug_regs(&hyp_vcpu->vcpu))
host_vcpu->arch.external_debug_state = hyp_vcpu->vcpu.arch.external_debug_state;
}
static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu) static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
{ {
struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu; struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
fpsimd_sve_flush(); fpsimd_sve_flush();
flush_debug_state(hyp_vcpu);
hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt; hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt;
@ -123,6 +147,7 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
unsigned int i; unsigned int i;
fpsimd_sve_sync(&hyp_vcpu->vcpu); fpsimd_sve_sync(&hyp_vcpu->vcpu);
sync_debug_state(hyp_vcpu);
host_vcpu->arch.ctxt = hyp_vcpu->vcpu.arch.ctxt; host_vcpu->arch.ctxt = hyp_vcpu->vcpu.arch.ctxt;
@ -200,8 +225,12 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
sync_hyp_vcpu(hyp_vcpu); sync_hyp_vcpu(hyp_vcpu);
} else { } else {
struct kvm_vcpu *vcpu = kern_hyp_va(host_vcpu);
/* The host is fully trusted, run its vCPU directly. */ /* The host is fully trusted, run its vCPU directly. */
ret = __kvm_vcpu_run(kern_hyp_va(host_vcpu)); fpsimd_lazy_switch_to_guest(vcpu);
ret = __kvm_vcpu_run(vcpu);
fpsimd_lazy_switch_to_host(vcpu);
} }
out: out:
cpu_reg(host_ctxt, 1) = ret; cpu_reg(host_ctxt, 1) = ret;
@ -651,12 +680,6 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
case ESR_ELx_EC_SMC64: case ESR_ELx_EC_SMC64:
handle_host_smc(host_ctxt); handle_host_smc(host_ctxt);
break; break;
case ESR_ELx_EC_SVE:
cpacr_clear_set(0, CPACR_EL1_ZEN);
isb();
sve_cond_update_zcr_vq(sve_vq_from_vl(kvm_host_sve_max_vl) - 1,
SYS_ZCR_EL2);
break;
case ESR_ELx_EC_IABT_LOW: case ESR_ELx_EC_IABT_LOW:
case ESR_ELx_EC_DABT_LOW: case ESR_ELx_EC_DABT_LOW:
handle_host_mem_abort(host_ctxt); handle_host_mem_abort(host_ctxt);

View file

@ -943,10 +943,10 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
ret = kvm_pgtable_get_leaf(&vm->pgt, ipa, &pte, &level); ret = kvm_pgtable_get_leaf(&vm->pgt, ipa, &pte, &level);
if (ret) if (ret)
return ret; return ret;
if (level != KVM_PGTABLE_LAST_LEVEL)
return -E2BIG;
if (!kvm_pte_valid(pte)) if (!kvm_pte_valid(pte))
return -ENOENT; return -ENOENT;
if (level != KVM_PGTABLE_LAST_LEVEL)
return -E2BIG;
state = guest_get_page_state(pte, ipa); state = guest_get_page_state(pte, ipa);
if (state != PKVM_PAGE_SHARED_BORROWED) if (state != PKVM_PAGE_SHARED_BORROWED)
@ -998,44 +998,57 @@ unlock:
return ret; return ret;
} }
int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot) static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
{ {
struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
u64 ipa = hyp_pfn_to_phys(gfn);
u64 phys; u64 phys;
int ret; int ret;
if (prot & ~KVM_PGTABLE_PROT_RWX) if (!IS_ENABLED(CONFIG_NVHE_EL2_DEBUG))
return -EINVAL; return;
host_lock_component(); host_lock_component();
guest_lock_component(vm); guest_lock_component(vm);
ret = __check_host_shared_guest(vm, &phys, ipa); ret = __check_host_shared_guest(vm, &phys, ipa);
if (!ret)
ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
guest_unlock_component(vm); guest_unlock_component(vm);
host_unlock_component(); host_unlock_component();
WARN_ON(ret && ret != -ENOENT);
}
int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot)
{
struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
u64 ipa = hyp_pfn_to_phys(gfn);
int ret;
if (pkvm_hyp_vm_is_protected(vm))
return -EPERM;
if (prot & ~KVM_PGTABLE_PROT_RWX)
return -EINVAL;
assert_host_shared_guest(vm, ipa);
guest_lock_component(vm);
ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
guest_unlock_component(vm);
return ret; return ret;
} }
int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm) int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
{ {
u64 ipa = hyp_pfn_to_phys(gfn); u64 ipa = hyp_pfn_to_phys(gfn);
u64 phys;
int ret; int ret;
host_lock_component(); if (pkvm_hyp_vm_is_protected(vm))
return -EPERM;
assert_host_shared_guest(vm, ipa);
guest_lock_component(vm); guest_lock_component(vm);
ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
ret = __check_host_shared_guest(vm, &phys, ipa);
if (!ret)
ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
guest_unlock_component(vm); guest_unlock_component(vm);
host_unlock_component();
return ret; return ret;
} }
@ -1043,18 +1056,15 @@ int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm) int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
{ {
u64 ipa = hyp_pfn_to_phys(gfn); u64 ipa = hyp_pfn_to_phys(gfn);
u64 phys;
int ret; int ret;
host_lock_component(); if (pkvm_hyp_vm_is_protected(vm))
return -EPERM;
assert_host_shared_guest(vm, ipa);
guest_lock_component(vm); guest_lock_component(vm);
ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
ret = __check_host_shared_guest(vm, &phys, ipa);
if (!ret)
ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
guest_unlock_component(vm); guest_unlock_component(vm);
host_unlock_component();
return ret; return ret;
} }
@ -1063,18 +1073,14 @@ int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
{ {
struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu); struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
u64 ipa = hyp_pfn_to_phys(gfn); u64 ipa = hyp_pfn_to_phys(gfn);
u64 phys;
int ret;
host_lock_component(); if (pkvm_hyp_vm_is_protected(vm))
return -EPERM;
assert_host_shared_guest(vm, ipa);
guest_lock_component(vm); guest_lock_component(vm);
kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
ret = __check_host_shared_guest(vm, &phys, ipa);
if (!ret)
kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
guest_unlock_component(vm); guest_unlock_component(vm);
host_unlock_component();
return ret; return 0;
} }
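A compact, hypothetical model of the shape introduced above: a debug-only assertion helper that is compiled out unless a debug option is enabled, letting the fast paths drop their per-call re-checks. A plain preprocessor switch stands in for IS_ENABLED(CONFIG_NVHE_EL2_DEBUG); all names are illustrative:

#include <stdbool.h>
#include <stdio.h>

#ifndef SKETCH_DEBUG
#define SKETCH_DEBUG 0          /* build with -DSKETCH_DEBUG=1 to enable */
#endif

static bool page_is_shared(unsigned long gfn) { return (gfn & 1) == 0; }

static void assert_shared(unsigned long gfn)
{
        if (!SKETCH_DEBUG)
                return;                         /* no-op in production builds */
        if (!page_is_shared(gfn))
                fprintf(stderr, "WARN: gfn %lu not shared\n", gfn);
}

static int relax_perms(unsigned long gfn)
{
        assert_shared(gfn);                     /* debug sanity check only */
        /* ... fast path proceeds without re-walking the page table ...   */
        return 0;
}

int main(void)
{
        relax_perms(2);
        relax_perms(3);
        return 0;
}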

View file

@ -39,6 +39,9 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
{ {
u64 val = CPTR_EL2_TAM; /* Same bit irrespective of E2H */ u64 val = CPTR_EL2_TAM; /* Same bit irrespective of E2H */
if (!guest_owns_fp_regs())
__activate_traps_fpsimd32(vcpu);
if (has_hvhe()) { if (has_hvhe()) {
val |= CPACR_EL1_TTA; val |= CPACR_EL1_TTA;
@ -47,6 +50,8 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
if (vcpu_has_sve(vcpu)) if (vcpu_has_sve(vcpu))
val |= CPACR_EL1_ZEN; val |= CPACR_EL1_ZEN;
} }
write_sysreg(val, cpacr_el1);
} else { } else {
val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1; val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1;
@ -61,12 +66,32 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
if (!guest_owns_fp_regs()) if (!guest_owns_fp_regs())
val |= CPTR_EL2_TFP; val |= CPTR_EL2_TFP;
write_sysreg(val, cptr_el2);
} }
}
if (!guest_owns_fp_regs()) static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
__activate_traps_fpsimd32(vcpu); {
if (has_hvhe()) {
u64 val = CPACR_EL1_FPEN;
kvm_write_cptr_el2(val); if (cpus_have_final_cap(ARM64_SVE))
val |= CPACR_EL1_ZEN;
if (cpus_have_final_cap(ARM64_SME))
val |= CPACR_EL1_SMEN;
write_sysreg(val, cpacr_el1);
} else {
u64 val = CPTR_NVHE_EL2_RES1;
if (!cpus_have_final_cap(ARM64_SVE))
val |= CPTR_EL2_TZ;
if (!cpus_have_final_cap(ARM64_SME))
val |= CPTR_EL2_TSM;
write_sysreg(val, cptr_el2);
}
} }
static void __activate_traps(struct kvm_vcpu *vcpu) static void __activate_traps(struct kvm_vcpu *vcpu)
@ -119,7 +144,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2); write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
kvm_reset_cptr_el2(vcpu); __deactivate_cptr_traps(vcpu);
write_sysreg(__kvm_hyp_host_vector, vbar_el2); write_sysreg(__kvm_hyp_host_vector, vbar_el2);
} }
@ -192,34 +217,6 @@ static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
kvm_handle_pvm_sysreg(vcpu, exit_code)); kvm_handle_pvm_sysreg(vcpu, exit_code));
} }
static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
{
/*
* Non-protected kvm relies on the host restoring its sve state.
* Protected kvm restores the host's sve state as not to reveal that
* fpsimd was used by a guest nor leak upper sve bits.
*/
if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
__hyp_sve_save_host();
/* Re-enable SVE traps if not supported for the guest vcpu. */
if (!vcpu_has_sve(vcpu))
cpacr_clear_set(CPACR_EL1_ZEN, 0);
} else {
__fpsimd_save_state(*host_data_ptr(fpsimd_state));
}
if (kvm_has_fpmr(kern_hyp_va(vcpu->kvm))) {
u64 val = read_sysreg_s(SYS_FPMR);
if (unlikely(is_protected_kvm_enabled()))
*host_data_ptr(fpmr) = val;
else
**host_data_ptr(fpmr_ptr) = val;
}
}
static const exit_handler_fn hyp_exit_handlers[] = { static const exit_handler_fn hyp_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = NULL, [0 ... ESR_ELx_EC_MAX] = NULL,
[ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32, [ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32,
@ -251,19 +248,21 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
return hyp_exit_handlers; return hyp_exit_handlers;
} }
/* static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
* Some guests (e.g., protected VMs) are not be allowed to run in AArch32.
* The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
* guest from dropping to AArch32 EL0 if implemented by the CPU. If the
* hypervisor spots a guest in such a state ensure it is handled, and don't
* trust the host to spot or fix it. The check below is based on the one in
* kvm_arch_vcpu_ioctl_run().
*
* Returns false if the guest ran in AArch32 when it shouldn't have, and
* thus should exit to the host, or true if a the guest run loop can continue.
*/
static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
{ {
const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu);
synchronize_vcpu_pstate(vcpu, exit_code);
/*
* Some guests (e.g., protected VMs) are not allowed to run in
* AArch32. The ARMv8 architecture does not give the hypervisor a
* mechanism to prevent a guest from dropping to AArch32 EL0 if
* implemented by the CPU. If the hypervisor spots a guest in such a
* state ensure it is handled, and don't trust the host to spot or fix
* it. The check below is based on the one in
* kvm_arch_vcpu_ioctl_run().
*/
if (unlikely(vcpu_is_protected(vcpu) && vcpu_mode_is_32bit(vcpu))) { if (unlikely(vcpu_is_protected(vcpu) && vcpu_mode_is_32bit(vcpu))) {
/* /*
* As we have caught the guest red-handed, decide that it isn't * As we have caught the guest red-handed, decide that it isn't
@ -276,6 +275,8 @@ static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
*exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT); *exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
*exit_code |= ARM_EXCEPTION_IL; *exit_code |= ARM_EXCEPTION_IL;
} }
return __fixup_guest_exit(vcpu, exit_code, handlers);
} }
/* Switch to the guest for legacy non-VHE systems */ /* Switch to the guest for legacy non-VHE systems */

View file

@ -136,6 +136,16 @@ write:
write_sysreg(val, cpacr_el1); write_sysreg(val, cpacr_el1);
} }
static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
{
u64 val = CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN;
if (cpus_have_final_cap(ARM64_SME))
val |= CPACR_EL1_SMEN_EL1EN;
write_sysreg(val, cpacr_el1);
}
static void __activate_traps(struct kvm_vcpu *vcpu) static void __activate_traps(struct kvm_vcpu *vcpu)
{ {
u64 val; u64 val;
@ -207,7 +217,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
*/ */
asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT)); asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
kvm_reset_cptr_el2(vcpu); __deactivate_cptr_traps(vcpu);
if (!arm64_kernel_unmapped_at_el0()) if (!arm64_kernel_unmapped_at_el0())
host_vectors = __this_cpu_read(this_cpu_vector); host_vectors = __this_cpu_read(this_cpu_vector);
@ -413,14 +423,6 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
return true; return true;
} }
static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
{
__fpsimd_save_state(*host_data_ptr(fpsimd_state));
if (kvm_has_fpmr(vcpu->kvm))
**host_data_ptr(fpmr_ptr) = read_sysreg_s(SYS_FPMR);
}
static bool kvm_hyp_handle_tlbi_el2(struct kvm_vcpu *vcpu, u64 *exit_code) static bool kvm_hyp_handle_tlbi_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
{ {
int ret = -EINVAL; int ret = -EINVAL;
@ -538,13 +540,10 @@ static const exit_handler_fn hyp_exit_handlers[] = {
[ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops, [ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops,
}; };
static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu) static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
{ {
return hyp_exit_handlers; synchronize_vcpu_pstate(vcpu, exit_code);
}
static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
{
/* /*
* If we were in HYP context on entry, adjust the PSTATE view * If we were in HYP context on entry, adjust the PSTATE view
* so that the usual helpers work correctly. * so that the usual helpers work correctly.
@ -564,6 +563,8 @@ static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
*vcpu_cpsr(vcpu) &= ~(PSR_MODE_MASK | PSR_MODE32_BIT); *vcpu_cpsr(vcpu) &= ~(PSR_MODE_MASK | PSR_MODE32_BIT);
*vcpu_cpsr(vcpu) |= mode; *vcpu_cpsr(vcpu) |= mode;
} }
return __fixup_guest_exit(vcpu, exit_code, hyp_exit_handlers);
} }
/* Switch to the guest for VHE systems running in EL2 */ /* Switch to the guest for VHE systems running in EL2 */
@ -578,6 +579,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
sysreg_save_host_state_vhe(host_ctxt); sysreg_save_host_state_vhe(host_ctxt);
fpsimd_lazy_switch_to_guest(vcpu);
/* /*
* Note that ARM erratum 1165522 requires us to configure both stage 1 * Note that ARM erratum 1165522 requires us to configure both stage 1
* and stage 2 translation for the guest context before we clear * and stage 2 translation for the guest context before we clear
@ -602,6 +605,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
__deactivate_traps(vcpu); __deactivate_traps(vcpu);
fpsimd_lazy_switch_to_host(vcpu);
sysreg_restore_host_state_vhe(host_ctxt); sysreg_restore_host_state_vhe(host_ctxt);
if (guest_owns_fp_regs()) if (guest_owns_fp_regs())

View file

@ -67,26 +67,27 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
if (!tmp) if (!tmp)
return -ENOMEM; return -ENOMEM;
swap(kvm->arch.nested_mmus, tmp);
/* /*
* If we went through a reallocation, adjust the MMU back-pointers in * If we went through a reallocation, adjust the MMU back-pointers in
* the previously initialised kvm_pgtable structures. * the previously initialised kvm_pgtable structures.
*/ */
if (kvm->arch.nested_mmus != tmp) if (kvm->arch.nested_mmus != tmp)
for (int i = 0; i < kvm->arch.nested_mmus_size; i++) for (int i = 0; i < kvm->arch.nested_mmus_size; i++)
tmp[i].pgt->mmu = &tmp[i]; kvm->arch.nested_mmus[i].pgt->mmu = &kvm->arch.nested_mmus[i];
for (int i = kvm->arch.nested_mmus_size; !ret && i < num_mmus; i++) for (int i = kvm->arch.nested_mmus_size; !ret && i < num_mmus; i++)
ret = init_nested_s2_mmu(kvm, &tmp[i]); ret = init_nested_s2_mmu(kvm, &kvm->arch.nested_mmus[i]);
if (ret) { if (ret) {
for (int i = kvm->arch.nested_mmus_size; i < num_mmus; i++) for (int i = kvm->arch.nested_mmus_size; i < num_mmus; i++)
kvm_free_stage2_pgd(&tmp[i]); kvm_free_stage2_pgd(&kvm->arch.nested_mmus[i]);
return ret; return ret;
} }
kvm->arch.nested_mmus_size = num_mmus; kvm->arch.nested_mmus_size = num_mmus;
kvm->arch.nested_mmus = tmp;
return 0; return 0;
} }
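A standalone sketch of why the hunk above publishes the reallocated array (via swap) before repairing back-pointers and initialising new entries: when array elements are pointed back at from separately allocated objects, every such pointer must be rebuilt against the block that will actually be kept, not the stale one. Types below are illustrative, not KVM's:

#include <assert.h>
#include <stdlib.h>

struct mmu;
struct pgt { struct mmu *mmu; };        /* back-pointer into the array */
struct mmu { struct pgt *pgt; };

static struct mmu *grow(struct mmu *old, size_t old_n, size_t new_n)
{
        struct mmu *arr = realloc(old, new_n * sizeof(*arr));

        if (!arr)
                return old;                     /* keep the old array on failure */

        /*
         * Existing entries may have moved: their separately allocated page
         * tables still point at the old addresses, so repoint them at the
         * array we are actually going to keep.
         */
        for (size_t i = 0; i < old_n; i++)
                arr[i].pgt->mmu = &arr[i];

        /* New entries get fresh page tables wired to the new block. */
        for (size_t i = old_n; i < new_n; i++) {
                arr[i].pgt = calloc(1, sizeof(*arr[i].pgt));
                if (!arr[i].pgt)
                        exit(1);                /* keep the sketch simple */
                arr[i].pgt->mmu = &arr[i];
        }
        return arr;
}

int main(void)
{
        struct mmu *mmus = grow(NULL, 0, 2);

        mmus = grow(mmus, 2, 4);
        for (int i = 0; i < 4; i++)
                assert(mmus[i].pgt->mmu == &mmus[i]);
        return 0;
}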

View file

@ -1452,6 +1452,16 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
return true; return true;
} }
static bool access_hv_timer(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
if (!vcpu_el2_e2h_is_set(vcpu))
return undef_access(vcpu, p, r);
return access_arch_timer(vcpu, p, r);
}
static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp, static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp,
s64 new, s64 cur) s64 new, s64 cur)
{ {
@ -3103,9 +3113,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG(CNTHP_CTL_EL2, access_arch_timer, reset_val, 0), EL2_REG(CNTHP_CTL_EL2, access_arch_timer, reset_val, 0),
EL2_REG(CNTHP_CVAL_EL2, access_arch_timer, reset_val, 0), EL2_REG(CNTHP_CVAL_EL2, access_arch_timer, reset_val, 0),
{ SYS_DESC(SYS_CNTHV_TVAL_EL2), access_arch_timer }, { SYS_DESC(SYS_CNTHV_TVAL_EL2), access_hv_timer },
EL2_REG(CNTHV_CTL_EL2, access_arch_timer, reset_val, 0), EL2_REG(CNTHV_CTL_EL2, access_hv_timer, reset_val, 0),
EL2_REG(CNTHV_CVAL_EL2, access_arch_timer, reset_val, 0), EL2_REG(CNTHV_CVAL_EL2, access_hv_timer, reset_val, 0),
{ SYS_DESC(SYS_CNTKCTL_EL12), access_cntkctl_el12 }, { SYS_DESC(SYS_CNTKCTL_EL12), access_cntkctl_el12 },

View file

@ -34,9 +34,9 @@
* *
* CPU Interface: * CPU Interface:
* *
* - kvm_vgic_vcpu_init(): initialization of static data that * - kvm_vgic_vcpu_init(): initialization of static data that doesn't depend
* doesn't depend on any sizing information or emulation type. No * on any sizing information. Private interrupts are allocated if not
* allocation is allowed there. * already allocated at vgic-creation time.
*/ */
/* EARLY INIT */ /* EARLY INIT */
@ -58,6 +58,8 @@ void kvm_vgic_early_init(struct kvm *kvm)
/* CREATION */ /* CREATION */
static int vgic_allocate_private_irqs_locked(struct kvm_vcpu *vcpu, u32 type);
/** /**
* kvm_vgic_create: triggered by the instantiation of the VGIC device by * kvm_vgic_create: triggered by the instantiation of the VGIC device by
* user space, either through the legacy KVM_CREATE_IRQCHIP ioctl (v2 only) * user space, either through the legacy KVM_CREATE_IRQCHIP ioctl (v2 only)
@ -112,6 +114,22 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
goto out_unlock; goto out_unlock;
} }
kvm_for_each_vcpu(i, vcpu, kvm) {
ret = vgic_allocate_private_irqs_locked(vcpu, type);
if (ret)
break;
}
if (ret) {
kvm_for_each_vcpu(i, vcpu, kvm) {
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
kfree(vgic_cpu->private_irqs);
vgic_cpu->private_irqs = NULL;
}
goto out_unlock;
}
kvm->arch.vgic.in_kernel = true; kvm->arch.vgic.in_kernel = true;
kvm->arch.vgic.vgic_model = type; kvm->arch.vgic.vgic_model = type;
@ -180,7 +198,7 @@ static int kvm_vgic_dist_init(struct kvm *kvm, unsigned int nr_spis)
return 0; return 0;
} }
static int vgic_allocate_private_irqs_locked(struct kvm_vcpu *vcpu) static int vgic_allocate_private_irqs_locked(struct kvm_vcpu *vcpu, u32 type)
{ {
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
int i; int i;
@ -218,17 +236,28 @@ static int vgic_allocate_private_irqs_locked(struct kvm_vcpu *vcpu)
/* PPIs */ /* PPIs */
irq->config = VGIC_CONFIG_LEVEL; irq->config = VGIC_CONFIG_LEVEL;
} }
switch (type) {
case KVM_DEV_TYPE_ARM_VGIC_V3:
irq->group = 1;
irq->mpidr = kvm_vcpu_get_mpidr_aff(vcpu);
break;
case KVM_DEV_TYPE_ARM_VGIC_V2:
irq->group = 0;
irq->targets = BIT(vcpu->vcpu_id);
break;
}
} }
return 0; return 0;
} }
static int vgic_allocate_private_irqs(struct kvm_vcpu *vcpu) static int vgic_allocate_private_irqs(struct kvm_vcpu *vcpu, u32 type)
{ {
int ret; int ret;
mutex_lock(&vcpu->kvm->arch.config_lock); mutex_lock(&vcpu->kvm->arch.config_lock);
ret = vgic_allocate_private_irqs_locked(vcpu); ret = vgic_allocate_private_irqs_locked(vcpu, type);
mutex_unlock(&vcpu->kvm->arch.config_lock); mutex_unlock(&vcpu->kvm->arch.config_lock);
return ret; return ret;
@ -258,7 +287,7 @@ int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
if (!irqchip_in_kernel(vcpu->kvm)) if (!irqchip_in_kernel(vcpu->kvm))
return 0; return 0;
ret = vgic_allocate_private_irqs(vcpu); ret = vgic_allocate_private_irqs(vcpu, dist->vgic_model);
if (ret) if (ret)
return ret; return ret;
@ -295,7 +324,7 @@ int vgic_init(struct kvm *kvm)
{ {
struct vgic_dist *dist = &kvm->arch.vgic; struct vgic_dist *dist = &kvm->arch.vgic;
struct kvm_vcpu *vcpu; struct kvm_vcpu *vcpu;
int ret = 0, i; int ret = 0;
unsigned long idx; unsigned long idx;
lockdep_assert_held(&kvm->arch.config_lock); lockdep_assert_held(&kvm->arch.config_lock);
@ -315,35 +344,6 @@ int vgic_init(struct kvm *kvm)
if (ret) if (ret)
goto out; goto out;
/* Initialize groups on CPUs created before the VGIC type was known */
kvm_for_each_vcpu(idx, vcpu, kvm) {
ret = vgic_allocate_private_irqs_locked(vcpu);
if (ret)
goto out;
for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++) {
struct vgic_irq *irq = vgic_get_vcpu_irq(vcpu, i);
switch (dist->vgic_model) {
case KVM_DEV_TYPE_ARM_VGIC_V3:
irq->group = 1;
irq->mpidr = kvm_vcpu_get_mpidr_aff(vcpu);
break;
case KVM_DEV_TYPE_ARM_VGIC_V2:
irq->group = 0;
irq->targets = 1U << idx;
break;
default:
ret = -EINVAL;
}
vgic_put_irq(kvm, irq);
if (ret)
goto out;
}
}
/* /*
* If we have GICv4.1 enabled, unconditionally request enable the * If we have GICv4.1 enabled, unconditionally request enable the
* v4 support so that we get HW-accelerated vSGIs. Otherwise, only * v4 support so that we get HW-accelerated vSGIs. Otherwise, only

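A small, illustrative model of the allocate-all-or-roll-back loop the kvm_vgic_create() hunk above adds: per-vCPU private-interrupt state is allocated up front, and on any failure everything already handed out is freed so the caller sees an all-or-nothing result. Names and sizes below are made up:

#include <stdio.h>
#include <stdlib.h>

#define NR_VCPUS 4

struct vcpu { int *private_irqs; };

static int create_all(struct vcpu *vcpus, int n)
{
        int ret = 0;

        for (int i = 0; i < n; i++) {
                vcpus[i].private_irqs = calloc(32, sizeof(int));
                if (!vcpus[i].private_irqs) {
                        ret = -1;
                        break;
                }
        }
        if (ret) {
                /* Unwind everything so the caller sees an all-or-nothing result. */
                for (int i = 0; i < n; i++) {
                        free(vcpus[i].private_irqs);
                        vcpus[i].private_irqs = NULL;
                }
        }
        return ret;
}

int main(void)
{
        struct vcpu vcpus[NR_VCPUS] = { 0 };

        printf("create: %d\n", create_all(vcpus, NR_VCPUS));
        for (int i = 0; i < NR_VCPUS; i++)
                free(vcpus[i].private_irqs);
        return 0;
}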
View file

@ -162,6 +162,13 @@ static int copy_p4d(struct trans_pgd_info *info, pgd_t *dst_pgdp,
unsigned long next; unsigned long next;
unsigned long addr = start; unsigned long addr = start;
if (pgd_none(READ_ONCE(*dst_pgdp))) {
dst_p4dp = trans_alloc(info);
if (!dst_p4dp)
return -ENOMEM;
pgd_populate(NULL, dst_pgdp, dst_p4dp);
}
dst_p4dp = p4d_offset(dst_pgdp, start); dst_p4dp = p4d_offset(dst_pgdp, start);
src_p4dp = p4d_offset(src_pgdp, start); src_p4dp = p4d_offset(src_pgdp, start);
do { do {

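A generic sketch of the populate-missing-level-then-walk pattern the copy_p4d() hunk above adds: before descending, make sure the top-level entry actually has a next-level table, allocating one if needed. The two-level toy table below is illustrative, not the kernel's page-table code:

#include <assert.h>
#include <stdlib.h>

#define ENTRIES 4

struct table { struct table *next[ENTRIES]; };

static struct table *get_or_alloc(struct table *top, int idx)
{
        if (!top->next[idx]) {
                top->next[idx] = calloc(1, sizeof(struct table));
                if (!top->next[idx])
                        return NULL;            /* -ENOMEM in the real code */
        }
        return top->next[idx];
}

int main(void)
{
        struct table top = { 0 };

        assert(get_or_alloc(&top, 2) != NULL);
        assert(get_or_alloc(&top, 2) == top.next[2]);   /* reused, not re-allocated */
        return 0;
}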
View file

@ -76,27 +76,6 @@ extern const char *__cpu_full_name[];
#define cpu_family_string() __cpu_family[raw_smp_processor_id()] #define cpu_family_string() __cpu_family[raw_smp_processor_id()]
#define cpu_full_name_string() __cpu_full_name[raw_smp_processor_id()] #define cpu_full_name_string() __cpu_full_name[raw_smp_processor_id()]
struct seq_file;
struct notifier_block;
extern int register_proc_cpuinfo_notifier(struct notifier_block *nb);
extern int proc_cpuinfo_notifier_call_chain(unsigned long val, void *v);
#define proc_cpuinfo_notifier(fn, pri) \
({ \
static struct notifier_block fn##_nb = { \
.notifier_call = fn, \
.priority = pri \
}; \
\
register_proc_cpuinfo_notifier(&fn##_nb); \
})
struct proc_cpuinfo_notifier_args {
struct seq_file *m;
unsigned long n;
};
static inline bool cpus_are_siblings(int cpua, int cpub) static inline bool cpus_are_siblings(int cpua, int cpub)
{ {
struct cpuinfo_loongarch *infoa = &cpu_data[cpua]; struct cpuinfo_loongarch *infoa = &cpu_data[cpua];

View file

@ -77,6 +77,8 @@ extern int __cpu_logical_map[NR_CPUS];
#define SMP_IRQ_WORK BIT(ACTION_IRQ_WORK) #define SMP_IRQ_WORK BIT(ACTION_IRQ_WORK)
#define SMP_CLEAR_VECTOR BIT(ACTION_CLEAR_VECTOR) #define SMP_CLEAR_VECTOR BIT(ACTION_CLEAR_VECTOR)
struct seq_file;
struct secondary_data { struct secondary_data {
unsigned long stack; unsigned long stack;
unsigned long thread_info; unsigned long thread_info;

View file

@@ -18,16 +18,19 @@
         .align  5
 SYM_FUNC_START(__arch_cpu_idle)
-        /* start of rollback region */
-        LONG_L  t0, tp, TI_FLAGS
-        nop
-        andi    t0, t0, _TIF_NEED_RESCHED
-        bnez    t0, 1f
-        nop
-        nop
-        nop
+        /* start of idle interrupt region */
+        ori     t0, zero, CSR_CRMD_IE
+        /* idle instruction needs irq enabled */
+        csrxchg t0, t0, LOONGARCH_CSR_CRMD
+        /*
+         * If an interrupt lands here; between enabling interrupts above and
+         * going idle on the next instruction, we must *NOT* go idle since the
+         * interrupt could have set TIF_NEED_RESCHED or caused an timer to need
+         * reprogramming. Fall through -- see handle_vint() below -- and have
+         * the idle loop take care of things.
+         */
         idle    0
-        /* end of rollback region */
+        /* end of idle interrupt region */
 1:      jr      ra
 SYM_FUNC_END(__arch_cpu_idle)
@@ -35,11 +38,10 @@ SYM_CODE_START(handle_vint)
         UNWIND_HINT_UNDEFINED
         BACKUP_T0T1
         SAVE_ALL
-        la_abs  t1, __arch_cpu_idle
+        la_abs  t1, 1b
         LONG_L  t0, sp, PT_ERA
-        /* 32 byte rollback region */
-        ori     t0, t0, 0x1f
-        xori    t0, t0, 0x1f
+        /* 3 instructions idle interrupt region */
+        ori     t0, t0, 0b1100
         bne     t0, t1, 1f
         LONG_S  t0, sp, PT_ERA
 1:      move    a0, sp


@@ -11,7 +11,6 @@
 void __cpuidle arch_cpu_idle(void)
 {
-        raw_local_irq_enable();
-        __arch_cpu_idle();      /* idle instruction needs irq enabled */
+        __arch_cpu_idle();
         raw_local_irq_disable();
 }


@@ -13,28 +13,12 @@
 #include <asm/processor.h>
 #include <asm/time.h>
-/*
- * No lock; only written during early bootup by CPU 0.
- */
-static RAW_NOTIFIER_HEAD(proc_cpuinfo_chain);
-int __ref register_proc_cpuinfo_notifier(struct notifier_block *nb)
-{
-        return raw_notifier_chain_register(&proc_cpuinfo_chain, nb);
-}
-int proc_cpuinfo_notifier_call_chain(unsigned long val, void *v)
-{
-        return raw_notifier_call_chain(&proc_cpuinfo_chain, val, v);
-}
 static int show_cpuinfo(struct seq_file *m, void *v)
 {
         unsigned long n = (unsigned long) v - 1;
         unsigned int isa = cpu_data[n].isa_level;
         unsigned int version = cpu_data[n].processor_id & 0xff;
         unsigned int fp_version = cpu_data[n].fpu_vers;
-        struct proc_cpuinfo_notifier_args proc_cpuinfo_notifier_args;
 #ifdef CONFIG_SMP
         if (!cpu_online(n))
@@ -91,20 +75,13 @@ static int show_cpuinfo(struct seq_file *m, void *v)
         if (cpu_has_lbt_mips)   seq_printf(m, " lbt_mips");
         seq_printf(m, "\n");
-        seq_printf(m, "Hardware Watchpoint\t: %s",
-                   cpu_has_watch ? "yes, " : "no\n");
+        seq_printf(m, "Hardware Watchpoint\t: %s", str_yes_no(cpu_has_watch));
         if (cpu_has_watch) {
-                seq_printf(m, "iwatch count: %d, dwatch count: %d\n",
+                seq_printf(m, ", iwatch count: %d, dwatch count: %d",
                            cpu_data[n].watch_ireg_count, cpu_data[n].watch_dreg_count);
         }
-        proc_cpuinfo_notifier_args.m = m;
-        proc_cpuinfo_notifier_args.n = n;
-        raw_notifier_call_chain(&proc_cpuinfo_chain, 0,
-                                &proc_cpuinfo_notifier_args);
-        seq_printf(m, "\n");
+        seq_printf(m, "\n\n");
         return 0;
 }
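The watchpoint hunk above switches to str_yes_no(), the helper from include/linux/string_choices.h. For readers without the tree at hand, a minimal user-space stand-in (an illustrative sketch, not the kernel header) behaves like this:

    #include <stdio.h>
    #include <stdbool.h>

    /* Stand-in for the kernel's str_yes_no(): maps a boolean to "yes"/"no",
     * so the seq_printf() call above no longer needs a ternary with embedded
     * punctuation. */
    static inline const char *str_yes_no(bool v)
    {
            return v ? "yes" : "no";
    }

    int main(void)
    {
            printf("Hardware Watchpoint\t: %s\n", str_yes_no(true));
            return 0;
    }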


@@ -33,7 +33,7 @@ void machine_halt(void)
         console_flush_on_panic(CONSOLE_FLUSH_PENDING);
         while (true) {
-                __arch_cpu_idle();
+                __asm__ __volatile__("idle 0" : : : "memory");
         }
 }
@@ -53,7 +53,7 @@ void machine_power_off(void)
 #endif
         while (true) {
-                __arch_cpu_idle();
+                __asm__ __volatile__("idle 0" : : : "memory");
         }
 }
@@ -74,6 +74,6 @@ void machine_restart(char *command)
         acpi_reboot();
         while (true) {
-                __arch_cpu_idle();
+                __asm__ __volatile__("idle 0" : : : "memory");
         }
 }


@@ -303,9 +303,9 @@ int kvm_arch_enable_virtualization_cpu(void)
          * TOE=0: Trap on Exception.
          * TIT=0: Trap on Timer.
          */
-        if (env & CSR_GCFG_GCIP_ALL)
+        if (env & CSR_GCFG_GCIP_SECURE)
                 gcfg |= CSR_GCFG_GCI_SECURE;
-        if (env & CSR_GCFG_MATC_ROOT)
+        if (env & CSR_GCFG_MATP_ROOT)
                 gcfg |= CSR_GCFG_MATC_ROOT;
         write_csr_gcfg(gcfg);


@@ -85,7 +85,7 @@
          * Guest CRMD comes from separate GCSR_CRMD register
          */
         ori     t0, zero, CSR_PRMD_PIE
-        csrxchg t0, t0, LOONGARCH_CSR_PRMD
+        csrwr   t0, LOONGARCH_CSR_PRMD
         /* Set PVM bit to setup ertn to guest context */
         ori     t0, zero, CSR_GSTAT_PVM


@@ -1548,9 +1548,6 @@ static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
         /* Restore timer state regardless */
         kvm_restore_timer(vcpu);
-        /* Control guest page CCA attribute */
-        change_csr_gcfg(CSR_GCFG_MATC_MASK, CSR_GCFG_MATC_ROOT);
         kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
         /* Restore hardware PMU CSRs */


@@ -25,7 +25,7 @@ unsigned int __no_sanitize_address do_csum(const unsigned char *buff, int len)
         const u64 *ptr;
         u64 data, sum64 = 0;
-        if (unlikely(len == 0))
+        if (unlikely(len <= 0))
                 return 0;
         offset = (unsigned long)buff & 7;
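In the do_csum() hunk above, len is a signed int, so the stricter check also rejects negative lengths that an "== 0" test would let through; the motivation is presumably robustness against bogus callers. A small user-space sketch (hypothetical csum_stub(), not the kernel routine) illustrates the difference:

    #include <stdio.h>

    /* With "len == 0" a negative length would reach the loop; the cast below
     * turns it into a huge unsigned count. "len <= 0" rejects it up front. */
    static unsigned int csum_stub(const unsigned char *buf, int len)
    {
            unsigned int sum = 0;
            size_t n = (size_t)len; /* a negative len becomes a huge count here */

            if (len <= 0)           /* also catches bogus negative lengths */
                    return 0;

            while (n--)
                    sum += *buf++;

            return sum;
    }

    int main(void)
    {
            unsigned char b[4] = { 1, 2, 3, 4 };

            printf("%u %u\n", csum_stub(b, 4), csum_stub(b, -4)); /* "10 0" */
            return 0;
    }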


@@ -3,6 +3,7 @@
  * Copyright (C) 2024 Loongson Technology Corporation Limited
  */
+#include <linux/memblock.h>
 #include <linux/pagewalk.h>
 #include <linux/pgtable.h>
 #include <asm/set_memory.h>
@@ -167,7 +168,7 @@ bool kernel_page_present(struct page *page)
         unsigned long addr = (unsigned long)page_address(page);
         if (addr < vm_map_base)
-                return true;
+                return memblock_is_memory(__pa(addr));
         pgd = pgd_offset_k(addr);
         if (pgd_none(pgdp_get(pgd)))


@@ -27,8 +27,8 @@
  */
 struct pt_regs {
 #ifdef CONFIG_32BIT
-        /* Pad bytes for argument save space on the stack. */
-        unsigned long pad0[8];
+        /* Saved syscall stack arguments; entries 0-3 unused. */
+        unsigned long args[8];
 #endif
         /* Saved main processor registers. */


@@ -57,37 +57,21 @@ static inline void mips_syscall_update_nr(struct task_struct *task,
 static inline void mips_get_syscall_arg(unsigned long *arg,
         struct task_struct *task, struct pt_regs *regs, unsigned int n)
 {
-        unsigned long usp __maybe_unused = regs->regs[29];
+#ifdef CONFIG_32BIT
         switch (n) {
         case 0: case 1: case 2: case 3:
                 *arg = regs->regs[4 + n];
                 return;
-#ifdef CONFIG_32BIT
         case 4: case 5: case 6: case 7:
-                get_user(*arg, (int *)usp + n);
+                *arg = regs->args[n];
                 return;
-#endif
-#ifdef CONFIG_64BIT
-        case 4: case 5: case 6: case 7:
-#ifdef CONFIG_MIPS32_O32
-                if (test_tsk_thread_flag(task, TIF_32BIT_REGS))
-                        get_user(*arg, (int *)usp + n);
-                else
-#endif
-                        *arg = regs->regs[4 + n];
-                return;
-#endif
-        default:
-                BUG();
         }
-        unreachable();
+#else
+        *arg = regs->regs[4 + n];
+        if ((IS_ENABLED(CONFIG_MIPS32_O32) &&
+             test_tsk_thread_flag(task, TIF_32BIT_REGS)))
+                *arg = (unsigned int)*arg;
+#endif
 }
 static inline long syscall_get_error(struct task_struct *task,


@@ -27,6 +27,12 @@ void output_ptreg_defines(void);
 void output_ptreg_defines(void)
 {
         COMMENT("MIPS pt_regs offsets.");
+#ifdef CONFIG_32BIT
+        OFFSET(PT_ARG4, pt_regs, args[4]);
+        OFFSET(PT_ARG5, pt_regs, args[5]);
+        OFFSET(PT_ARG6, pt_regs, args[6]);
+        OFFSET(PT_ARG7, pt_regs, args[7]);
+#endif
         OFFSET(PT_R0, pt_regs, regs[0]);
         OFFSET(PT_R1, pt_regs, regs[1]);
         OFFSET(PT_R2, pt_regs, regs[2]);


@@ -64,10 +64,10 @@ load_a6: user_lw(t7, 24(t0))            # argument #7 from usp
 load_a7: user_lw(t8, 28(t0))            # argument #8 from usp
 loads_done:
-        sw      t5, 16(sp)              # argument #5 to ksp
-        sw      t6, 20(sp)              # argument #6 to ksp
-        sw      t7, 24(sp)              # argument #7 to ksp
-        sw      t8, 28(sp)              # argument #8 to ksp
+        sw      t5, PT_ARG4(sp)         # argument #5 to ksp
+        sw      t6, PT_ARG5(sp)         # argument #6 to ksp
+        sw      t7, PT_ARG6(sp)         # argument #7 to ksp
+        sw      t8, PT_ARG7(sp)         # argument #8 to ksp
         .set    pop
         .section __ex_table,"a"
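Taken together, the three MIPS hunks above move the O32 stack-passed syscall arguments into pt_regs: the entry code stores arguments 5-8 at the new PT_ARG4..PT_ARG7 offsets, so mips_get_syscall_arg() can read them back without a get_user(). A self-contained sketch of that access pattern (simplified, with hypothetical fake_pt_regs/get_syscall_arg names, not the kernel headers):

    #include <stdio.h>

    /* Simplified stand-in for struct pt_regs on 32-bit MIPS after this change:
     * args[4..7] are filled by the syscall entry code, regs[4..7] hold a0-a3. */
    struct fake_pt_regs {
            unsigned long args[8];
            unsigned long regs[32];
    };

    static void get_syscall_arg(unsigned long *arg,
                                const struct fake_pt_regs *regs, unsigned int n)
    {
            if (n < 4)
                    *arg = regs->regs[4 + n];   /* register arguments a0-a3 */
            else
                    *arg = regs->args[n];       /* copied from the user stack at entry */
    }

    int main(void)
    {
            struct fake_pt_regs r = { .args = { 0, 0, 0, 0, 55, 66, 77, 88 } };
            unsigned long a;

            get_syscall_arg(&a, &r, 5);
            printf("%lu\n", a);                 /* prints 66 */
            return 0;
    }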


@@ -77,9 +77,17 @@
 /*
  * With 4K page size the real_pte machinery is all nops.
  */
-#define __real_pte(e, p, o)     ((real_pte_t){(e)})
+static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep, int offset)
+{
+        return (real_pte_t){pte};
+}
 #define __rpte_to_pte(r)        ((r).pte)
-#define __rpte_to_hidx(r,index) (pte_val(__rpte_to_pte(r)) >> H_PAGE_F_GIX_SHIFT)
+static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
+{
+        return pte_val(__rpte_to_pte(rpte)) >> H_PAGE_F_GIX_SHIFT;
+}
 #define pte_iterate_hashed_subpages(rpte, psize, va, index, shift)     \
         do {                                                            \


@@ -108,7 +108,7 @@ static int text_area_cpu_up(unsigned int cpu)
         unsigned long addr;
         int err;
-        area = get_vm_area(PAGE_SIZE, VM_ALLOC);
+        area = get_vm_area(PAGE_SIZE, 0);
         if (!area) {
                 WARN_ONCE(1, "Failed to create text area for cpu %d\n",
                           cpu);
@@ -493,7 +493,9 @@ static int __do_patch_instructions_mm(u32 *addr, u32 *code, size_t len, bool rep
         orig_mm = start_using_temp_mm(patching_mm);
+        kasan_disable_current();
         err = __patch_instructions(patch_addr, code, len, repeat_instr);
+        kasan_enable_current();
         /* context synchronisation performed by __patch_instructions */
         stop_using_temp_mm(patching_mm, orig_mm);


@@ -75,7 +75,7 @@ static void fsl_msi_print_chip(struct irq_data *irqd, struct seq_file *p)
         srs = (hwirq >> msi_data->srs_shift) & MSI_SRS_MASK;
         cascade_virq = msi_data->cascade_array[srs]->virq;
-        seq_printf(p, " fsl-msi-%d", cascade_virq);
+        seq_printf(p, "fsl-msi-%d", cascade_virq);
 }


@@ -86,7 +86,7 @@ static int cmma_test_essa(void)
                 : [reg1] "=&d" (reg1),
                   [reg2] "=&a" (reg2),
                   [rc] "+&d" (rc),
-                  [tmp] "=&d" (tmp),
+                  [tmp] "+&d" (tmp),
                   "+Q" (get_lowcore()->program_new_psw),
                   "=Q" (old)
                 : [psw_old] "a" (&old),


@@ -469,6 +469,7 @@ CONFIG_SCSI_DH_ALUA=m
 CONFIG_MD=y
 CONFIG_BLK_DEV_MD=y
 # CONFIG_MD_BITMAP_FILE is not set
+CONFIG_MD_LINEAR=m
 CONFIG_MD_CLUSTER=m
 CONFIG_BCACHE=m
 CONFIG_BLK_DEV_DM=y
@@ -740,7 +741,6 @@ CONFIG_IMA=y
 CONFIG_IMA_DEFAULT_HASH_SHA256=y
 CONFIG_IMA_WRITE_POLICY=y
 CONFIG_IMA_APPRAISE=y
-CONFIG_LSM="yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor"
 CONFIG_BUG_ON_DATA_CORRUPTION=y
 CONFIG_CRYPTO_USER=m
 # CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
@@ -875,6 +875,7 @@ CONFIG_RCU_CPU_STALL_TIMEOUT=300
 CONFIG_LATENCYTOP=y
 CONFIG_BOOTTIME_TRACING=y
 CONFIG_FUNCTION_GRAPH_RETVAL=y
+CONFIG_FUNCTION_GRAPH_RETADDR=y
 CONFIG_FPROBE=y
 CONFIG_FUNCTION_PROFILER=y
 CONFIG_STACK_TRACER=y


@@ -459,6 +459,7 @@ CONFIG_SCSI_DH_ALUA=m
 CONFIG_MD=y
 CONFIG_BLK_DEV_MD=y
 # CONFIG_MD_BITMAP_FILE is not set
+CONFIG_MD_LINEAR=m
 CONFIG_MD_CLUSTER=m
 CONFIG_BCACHE=m
 CONFIG_BLK_DEV_DM=y
@@ -725,7 +726,6 @@ CONFIG_IMA=y
 CONFIG_IMA_DEFAULT_HASH_SHA256=y
 CONFIG_IMA_WRITE_POLICY=y
 CONFIG_IMA_APPRAISE=y
-CONFIG_LSM="yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor"
 CONFIG_BUG_ON_DATA_CORRUPTION=y
 CONFIG_CRYPTO_FIPS=y
 CONFIG_CRYPTO_USER=m
@@ -826,6 +826,7 @@ CONFIG_RCU_CPU_STALL_TIMEOUT=60
 CONFIG_LATENCYTOP=y
 CONFIG_BOOTTIME_TRACING=y
 CONFIG_FUNCTION_GRAPH_RETVAL=y
+CONFIG_FUNCTION_GRAPH_RETADDR=y
 CONFIG_FPROBE=y
 CONFIG_FUNCTION_PROFILER=y
 CONFIG_STACK_TRACER=y


@@ -62,7 +62,6 @@ CONFIG_ZFCP=y
 # CONFIG_INOTIFY_USER is not set
 # CONFIG_MISC_FILESYSTEMS is not set
 # CONFIG_NETWORK_FILESYSTEMS is not set
-CONFIG_LSM="yama,loadpin,safesetid,integrity"
 # CONFIG_ZLIB_DFLTCC is not set
 CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_PRINTK_TIME=y

Some files were not shown because too many files have changed in this diff.