linux/arch/arm/include/asm/uaccess.h

/* SPDX-License-Identifier: GPL-2.0-only */
/*
* arch/arm/include/asm/uaccess.h
*/
#ifndef _ASMARM_UACCESS_H
#define _ASMARM_UACCESS_H
/*
* User space memory access functions
*/
#include <linux/kernel.h>
#include <linux/string.h>
#include <asm/page.h>
#include <asm/domain.h>
#include <linux/unaligned.h>
#include <asm/unified.h>
#include <asm/pgtable.h>
#include <asm/proc-fns.h>
#include <asm/compiler.h>
#include <asm/extable.h>
/*
 * These two functions allow hooking accesses to userspace to increase
 * system integrity by ensuring that the kernel cannot inadvertently
 * perform such accesses (e.g. via list poison values) which could then
 * be exploited for privilege escalation.
 */
#if defined(CONFIG_CPU_SW_DOMAIN_PAN)
static __always_inline unsigned int uaccess_save_and_enable(void)
{
unsigned int old_domain = get_domain();
/* Set the current domain access to permit user accesses */
set_domain((old_domain & ~domain_mask(DOMAIN_USER)) |
domain_val(DOMAIN_USER, DOMAIN_CLIENT));
return old_domain;
}
static __always_inline void uaccess_restore(unsigned int flags)
{
/* Restore the user access mask */
set_domain(flags);
}
#elif defined(CONFIG_CPU_TTBR0_PAN)
static __always_inline unsigned int uaccess_save_and_enable(void)
{
unsigned int old_ttbcr = cpu_get_ttbcr();
/*
 * Enable TTBR0 page table walks (T0SZ = 0, EPD0 = 0) and ASID from
 * TTBR0 (A1 = 0).
 */
cpu_set_ttbcr(old_ttbcr & ~(TTBCR_A1 | TTBCR_EPD0 | TTBCR_T0SZ_MASK));
isb();
return old_ttbcr;
}
static inline void uaccess_restore(unsigned int flags)
{
cpu_set_ttbcr(flags);
isb();
}
#else
static inline unsigned int uaccess_save_and_enable(void)
{
return 0;
}
static inline void uaccess_restore(unsigned int flags)
{
}
#endif
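/*
 * Usage sketch (illustrative only, hypothetical caller): whichever of the
 * implementations above is in effect, a user access is bracketed by this
 * pair so the access window is opened and then restored to its prior state:
 *
 *	unsigned int ua_flags = uaccess_save_and_enable();
 *	...perform the user-space load or store...
 *	uaccess_restore(ua_flags);
 */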
/*
* These two are intentionally not defined anywhere - if the kernel
* code generates any references to them, that's a bug.
*/
extern int __get_user_bad(void);
extern int __put_user_bad(void);
#ifdef CONFIG_MMU
/*
* This is a type: either unsigned long, if the argument fits into
* that type, or otherwise unsigned long long.
*/
#define __inttype(x) \
__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
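/*
 * For example, __inttype(u8) and __inttype(u32) both evaluate to
 * unsigned long here, while __inttype(u64) evaluates to
 * unsigned long long.
 */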
/*
 * Sanitise a uaccess pointer such that it becomes NULL if addr+size
 * is above the user address space limit (TASK_SIZE).
 */
#define uaccess_mask_range_ptr(ptr, size) \
((__typeof__(ptr))__uaccess_mask_range_ptr(ptr, size))
static inline void __user *__uaccess_mask_range_ptr(const void __user *ptr,
size_t size)
{
void __user *safe_ptr = (void __user *)ptr;
unsigned long tmp;
asm volatile(
" .syntax unified\n"
" sub %1, %3, #1\n"
" subs %1, %1, %0\n"
" addhs %1, %1, #1\n"
" subshs %1, %1, %2\n"
" movlo %0, #0\n"
: "+r" (safe_ptr), "=&r" (tmp)
: "r" (size), "r" (TASK_SIZE)
: "cc");
csdb();
return safe_ptr;
}
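/*
 * Illustrative sketch (hypothetical caller and names): before dereferencing
 * an unchecked user pointer under Spectre mitigation, the pointer can be
 * masked so an out-of-range address degrades to NULL rather than being
 * used speculatively:
 *
 *	datap = uaccess_mask_range_ptr(uptr, len);
 *	if (!datap)
 *		return -EFAULT;
 */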
/*
* Single-value transfer routines. They automatically use the right
* size if we just have the right pointer type. Note that the functions
* which read from user space (*get_*) need to take care not to leak
* kernel data even if the calling code is buggy and fails to check
* the return value. This means zeroing out the destination variable
* or buffer on error. Normally this is done out of line by the
* fixup code, but there are a few places where it intrudes on the
* main code path. When we only write to user space, there is no
* problem.
*/
extern int __get_user_1(void *);
extern int __get_user_2(void *);
extern int __get_user_4(void *);
extern int __get_user_32t_8(void *);
extern int __get_user_8(void *);
extern int __get_user_64t_1(void *);
extern int __get_user_64t_2(void *);
extern int __get_user_64t_4(void *);
#define __get_user_x(__r2, __p, __e, __l, __s) \
__asm__ __volatile__ ( \
__asmeq("%0", "r0") __asmeq("%1", "r2") \
__asmeq("%3", "r1") \
"bl __get_user_" #__s \
: "=&r" (__e), "=r" (__r2) \
: "0" (__p), "r" (__l) \
: "ip", "lr", "cc")
/* narrowing a double-word get into a single 32-bit word register: */
#ifdef __ARMEB__
#define __get_user_x_32t(__r2, __p, __e, __l, __s) \
__get_user_x(__r2, __p, __e, __l, 32t_8)
#else
#define __get_user_x_32t __get_user_x
#endif
/*
 * Store the result into the least significant word of a 64-bit target
 * variable. This differs only in the big-endian case, where the LSW of
 * the 64-bit __r2 register pair is r3:
 */
#ifdef __ARMEB__
#define __get_user_x_64t(__r2, __p, __e, __l, __s) \
__asm__ __volatile__ ( \
__asmeq("%0", "r0") __asmeq("%1", "r2") \
__asmeq("%3", "r1") \
"bl __get_user_64t_" #__s \
: "=&r" (__e), "=r" (__r2) \
: "0" (__p), "r" (__l) \
: "ip", "lr", "cc")
#else
#define __get_user_x_64t __get_user_x
#endif
#define __get_user_check(x, p) \
({ \
unsigned long __limit = TASK_SIZE - 1; \
register typeof(*(p)) __user *__p asm("r0") = (p); \
register __inttype(x) __r2 asm("r2"); \
register unsigned long __l asm("r1") = __limit; \
register int __e asm("r0"); \
unsigned int __ua_flags = uaccess_save_and_enable(); \
int __tmp_e; \
switch (sizeof(*(__p))) { \
case 1: \
if (sizeof((x)) >= 8) \
__get_user_x_64t(__r2, __p, __e, __l, 1); \
else \
__get_user_x(__r2, __p, __e, __l, 1); \
break; \
case 2: \
if (sizeof((x)) >= 8) \
__get_user_x_64t(__r2, __p, __e, __l, 2); \
else \
__get_user_x(__r2, __p, __e, __l, 2); \
break; \
case 4: \
if (sizeof((x)) >= 8) \
__get_user_x_64t(__r2, __p, __e, __l, 4); \
else \
__get_user_x(__r2, __p, __e, __l, 4); \
break; \
case 8: \
if (sizeof((x)) < 8) \
__get_user_x_32t(__r2, __p, __e, __l, 4); \
else \
__get_user_x(__r2, __p, __e, __l, 8); \
break; \
default: __e = __get_user_bad(); break; \
} \
__tmp_e = __e; \
uaccess_restore(__ua_flags); \
x = (typeof(*(p))) __r2; \
__tmp_e; \
})
#define get_user(x, p) \
({ \
might_fault(); \
__get_user_check(x, p); \
})
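/*
 * Usage sketch (hypothetical ioctl handler, names illustrative): get_user()
 * reads one value from user space, returning 0 on success or -EFAULT on a
 * faulting access, with the destination zeroed on error as noted above:
 *
 *	u32 val;
 *	if (get_user(val, (u32 __user *)arg))
 *		return -EFAULT;
 */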
extern int __put_user_1(void *, unsigned int);
extern int __put_user_2(void *, unsigned int);
extern int __put_user_4(void *, unsigned int);
extern int __put_user_8(void *, unsigned long long);
#define __put_user_check(__pu_val, __ptr, __err, __s) \
({ \
unsigned long __limit = TASK_SIZE - 1; \
register typeof(__pu_val) __r2 asm("r2") = __pu_val; \
register const void __user *__p asm("r0") = __ptr; \
register unsigned long __l asm("r1") = __limit; \
register int __e asm("r0"); \
__asm__ __volatile__ ( \
__asmeq("%0", "r0") __asmeq("%2", "r2") \
__asmeq("%3", "r1") \
"bl __put_user_" #__s \
: "=&r" (__e) \
: "0" (__p), "r" (__r2), "r" (__l) \
: "ip", "lr", "cc"); \
__err = __e; \
})
#else /* CONFIG_MMU */
#define get_user(x, p) __get_user(x, p)
#define __put_user_check __put_user_nocheck
#endif /* CONFIG_MMU */
#include <asm-generic/access_ok.h>
#ifdef CONFIG_CPU_SPECTRE
/*
* When mitigating Spectre variant 1, it is not worth fixing the non-
* verifying accessors, because we need to add verification of the
* address space there. Force these to use the standard get_user()
* version instead.
*/
#define __get_user(x, ptr) get_user(x, ptr)
#else
/*
* The "__xxx" versions of the user access functions do not verify the
* address space - it must have been done previously with a separate
* "access_ok()" call.
*
* The "xxx_error" versions set the third argument to EFAULT if an
* error occurs, and leave it unchanged on success. Note that these
* versions are void (ie, don't return a value as such).
*/
#define __get_user(x, ptr) \
({ \
long __gu_err = 0; \
__get_user_err((x), (ptr), __gu_err, TUSER()); \
__gu_err; \
})
#define __get_user_err(x, ptr, err, __t) \
do { \
unsigned long __gu_addr = (unsigned long)(ptr); \
unsigned long __gu_val; \
unsigned int __ua_flags; \
__chk_user_ptr(ptr); \
might_fault(); \
__ua_flags = uaccess_save_and_enable(); \
switch (sizeof(*(ptr))) { \
case 1: __get_user_asm_byte(__gu_val, __gu_addr, err, __t); break; \
case 2: __get_user_asm_half(__gu_val, __gu_addr, err, __t); break; \
case 4: __get_user_asm_word(__gu_val, __gu_addr, err, __t); break; \
default: (__gu_val) = __get_user_bad(); \
} \
uaccess_restore(__ua_flags); \
(x) = (__typeof__(*(ptr)))__gu_val; \
} while (0)
#endif
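/*
 * Usage sketch for the non-verifying variant (hypothetical names): the
 * address range must have been validated beforehand with access_ok():
 *
 *	if (!access_ok(uptr, sizeof(u32)))
 *		return -EFAULT;
 *	if (__get_user(val, uptr))
 *		return -EFAULT;
 */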
#define __get_user_asm(x, addr, err, instr) \
__asm__ __volatile__( \
"1: " instr " %1, [%2], #0\n" \
"2:\n" \
" .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \
"3: mov %0, %3\n" \
" mov %1, #0\n" \
" b 2b\n" \
" .popsection\n" \
" .pushsection __ex_table,\"a\"\n" \
" .align 3\n" \
" .long 1b, 3b\n" \
" .popsection" \
: "+r" (err), "=&r" (x) \
: "r" (addr), "i" (-EFAULT) \
: "cc")
#define __get_user_asm_byte(x, addr, err, __t) \
__get_user_asm(x, addr, err, "ldrb" __t)
#if __LINUX_ARM_ARCH__ >= 6
#define __get_user_asm_half(x, addr, err, __t) \
__get_user_asm(x, addr, err, "ldrh" __t)
#else
#ifndef __ARMEB__
#define __get_user_asm_half(x, __gu_addr, err, __t) \
({ \
unsigned long __b1, __b2; \
__get_user_asm_byte(__b1, __gu_addr, err, __t); \
__get_user_asm_byte(__b2, __gu_addr + 1, err, __t); \
(x) = __b1 | (__b2 << 8); \
})
#else
#define __get_user_asm_half(x, __gu_addr, err, __t) \
({ \
unsigned long __b1, __b2; \
__get_user_asm_byte(__b1, __gu_addr, err, __t); \
__get_user_asm_byte(__b2, __gu_addr + 1, err, __t); \
(x) = (__b1 << 8) | __b2; \
})
#endif
#endif /* __LINUX_ARM_ARCH__ >= 6 */
#define __get_user_asm_word(x, addr, err, __t) \
__get_user_asm(x, addr, err, "ldr" __t)
#define __put_user_switch(x, ptr, __err, __fn) \
do { \
const __typeof__(*(ptr)) __user *__pu_ptr = (ptr); \
__typeof__(*(ptr)) __pu_val = (x); \
unsigned int __ua_flags; \
might_fault(); \
__ua_flags = uaccess_save_and_enable(); \
switch (sizeof(*(ptr))) { \
case 1: __fn(__pu_val, __pu_ptr, __err, 1); break; \
case 2: __fn(__pu_val, __pu_ptr, __err, 2); break; \
case 4: __fn(__pu_val, __pu_ptr, __err, 4); break; \
case 8: __fn(__pu_val, __pu_ptr, __err, 8); break; \
default: __err = __put_user_bad(); break; \
} \
uaccess_restore(__ua_flags); \
} while (0)
#define put_user(x, ptr) \
({ \
int __pu_err = 0; \
__put_user_switch((x), (ptr), __pu_err, __put_user_check); \
__pu_err; \
})
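/*
 * Usage sketch (names illustrative): put_user() writes one value to user
 * space and returns 0 on success or -EFAULT on failure:
 *
 *	if (put_user(status, (u32 __user *)arg))
 *		return -EFAULT;
 */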
#ifdef CONFIG_CPU_SPECTRE
/*
* When mitigating Spectre variant 1.1, all accessors need to include
* verification of the address space.
*/
#define __put_user(x, ptr) put_user(x, ptr)
#else
#define __put_user(x, ptr) \
({ \
long __pu_err = 0; \
__put_user_switch((x), (ptr), __pu_err, __put_user_nocheck); \
__pu_err; \
})
#define __put_user_nocheck(x, __pu_ptr, __err, __size) \
do { \
unsigned long __pu_addr = (unsigned long)__pu_ptr; \
__put_user_nocheck_##__size(x, __pu_addr, __err, TUSER());\
} while (0)
#define __put_user_nocheck_1 __put_user_asm_byte
#define __put_user_nocheck_2 __put_user_asm_half
#define __put_user_nocheck_4 __put_user_asm_word
#define __put_user_nocheck_8 __put_user_asm_dword
#endif /* !CONFIG_CPU_SPECTRE */
#define __put_user_asm(x, __pu_addr, err, instr) \
__asm__ __volatile__( \
"1: " instr " %1, [%2], #0\n" \
"2:\n" \
" .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \
"3: mov %0, %3\n" \
" b 2b\n" \
" .popsection\n" \
" .pushsection __ex_table,\"a\"\n" \
" .align 3\n" \
" .long 1b, 3b\n" \
" .popsection" \
: "+r" (err) \
: "r" (x), "r" (__pu_addr), "i" (-EFAULT) \
: "cc")
#define __put_user_asm_byte(x, __pu_addr, err, __t) \
__put_user_asm(x, __pu_addr, err, "strb" __t)
#if __LINUX_ARM_ARCH__ >= 6
#define __put_user_asm_half(x, __pu_addr, err, __t) \
__put_user_asm(x, __pu_addr, err, "strh" __t)
#else
#ifndef __ARMEB__
#define __put_user_asm_half(x, __pu_addr, err, __t) \
({ \
unsigned long __temp = (__force unsigned long)(x); \
__put_user_asm_byte(__temp, __pu_addr, err, __t); \
__put_user_asm_byte(__temp >> 8, __pu_addr + 1, err, __t);\
})
#else
#define __put_user_asm_half(x, __pu_addr, err, __t) \
({ \
unsigned long __temp = (__force unsigned long)(x); \
__put_user_asm_byte(__temp >> 8, __pu_addr, err, __t); \
__put_user_asm_byte(__temp, __pu_addr + 1, err, __t); \
})
#endif
#endif /* __LINUX_ARM_ARCH__ >= 6 */
#define __put_user_asm_word(x, __pu_addr, err, __t) \
__put_user_asm(x, __pu_addr, err, "str" __t)
#ifndef __ARMEB__
#define __reg_oper0 "%R2"
#define __reg_oper1 "%Q2"
#else
#define __reg_oper0 "%Q2"
#define __reg_oper1 "%R2"
#endif
#define __put_user_asm_dword(x, __pu_addr, err, __t) \
__asm__ __volatile__( \
ARM( "1: str" __t " " __reg_oper1 ", [%1], #4\n" ) \
ARM( "2: str" __t " " __reg_oper0 ", [%1]\n" ) \
THUMB( "1: str" __t " " __reg_oper1 ", [%1]\n" ) \
THUMB( "2: str" __t " " __reg_oper0 ", [%1, #4]\n" ) \
"3:\n" \
" .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \
"4: mov %0, %3\n" \
" b 3b\n" \
" .popsection\n" \
" .pushsection __ex_table,\"a\"\n" \
" .align 3\n" \
" .long 1b, 4b\n" \
" .long 2b, 4b\n" \
" .popsection" \
: "+r" (err), "+r" (__pu_addr) \
: "r" (x), "i" (-EFAULT) \
: "cc")
#define __get_kernel_nofault(dst, src, type, err_label) \
do { \
const type *__pk_ptr = (src); \
unsigned long __src = (unsigned long)(__pk_ptr); \
type __val; \
int __err = 0; \
switch (sizeof(type)) { \
case 1: __get_user_asm_byte(__val, __src, __err, ""); break; \
case 2: __get_user_asm_half(__val, __src, __err, ""); break; \
case 4: __get_user_asm_word(__val, __src, __err, ""); break; \
case 8: { \
u32 *__v32 = (u32*)&__val; \
__get_user_asm_word(__v32[0], __src, __err, ""); \
if (__err) \
break; \
__get_user_asm_word(__v32[1], __src+4, __err, ""); \
break; \
} \
default: __err = __get_user_bad(); break; \
} \
if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) \
put_unaligned(__val, (type *)(dst)); \
else \
*(type *)(dst) = __val; /* aligned by caller */ \
if (__err) \
goto err_label; \
} while (0)
#define __put_kernel_nofault(dst, src, type, err_label) \
do { \
const type *__pk_ptr = (dst); \
unsigned long __dst = (unsigned long)__pk_ptr; \
int __err = 0; \
type __val = IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) \
? get_unaligned((type *)(src)) \
: *(type *)(src); /* aligned by caller */ \
switch (sizeof(type)) { \
case 1: __put_user_asm_byte(__val, __dst, __err, ""); break; \
case 2: __put_user_asm_half(__val, __dst, __err, ""); break; \
case 4: __put_user_asm_word(__val, __dst, __err, ""); break; \
case 8: __put_user_asm_dword(__val, __dst, __err, ""); break; \
default: __err = __put_user_bad(); break; \
} \
if (__err) \
goto err_label; \
} while (0)
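/*
 * These two macros back the generic copy_{from,to}_kernel_nofault()
 * helpers; a caller-level sketch using the generic wrappers from
 * <linux/uaccess.h> (names illustrative):
 *
 *	unsigned long insn;
 *	if (copy_from_kernel_nofault(&insn, (void *)addr, sizeof(insn)))
 *		...the read faulted and was safely aborted...
 */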
#ifdef CONFIG_MMU
extern unsigned long __must_check
arm_copy_from_user(void *to, const void __user *from, unsigned long n);
static inline unsigned long __must_check
raw_copy_from_user(void *to, const void __user *from, unsigned long n)
{
unsigned int __ua_flags;
__ua_flags = uaccess_save_and_enable();
n = arm_copy_from_user(to, from, n);
uaccess_restore(__ua_flags);
return n;
}
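/*
 * raw_copy_from_user() is the arch backend behind the generic
 * copy_from_user(); a typical caller-level sketch (hypothetical
 * struct and pointer names):
 *
 *	struct some_args args;
 *	if (copy_from_user(&args, uargs, sizeof(args)))
 *		return -EFAULT;
 */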
extern unsigned long __must_check
arm_copy_to_user(void __user *to, const void *from, unsigned long n);
extern unsigned long __must_check
__copy_to_user_std(void __user *to, const void *from, unsigned long n);
static inline unsigned long __must_check
raw_copy_to_user(void __user *to, const void *from, unsigned long n)
{
#ifndef CONFIG_UACCESS_WITH_MEMCPY
unsigned int __ua_flags;
__ua_flags = uaccess_save_and_enable();
n = arm_copy_to_user(to, from, n);
uaccess_restore(__ua_flags);
return n;
#else
return arm_copy_to_user(to, from, n);
#endif
}
extern unsigned long __must_check
arm_clear_user(void __user *addr, unsigned long n);
extern unsigned long __must_check
__clear_user_std(void __user *addr, unsigned long n);
static inline unsigned long __must_check
__clear_user(void __user *addr, unsigned long n)
{
unsigned int __ua_flags = uaccess_save_and_enable();
n = arm_clear_user(addr, n);
uaccess_restore(__ua_flags);
return n;
}
#else
static inline unsigned long
raw_copy_from_user(void *to, const void __user *from, unsigned long n)
{
memcpy(to, (const void __force *)from, n);
return 0;
}
static inline unsigned long
raw_copy_to_user(void __user *to, const void *from, unsigned long n)
{
memcpy((void __force *)to, from, n);
return 0;
}
#define __clear_user(addr, n) (memset((void __force *)addr, 0, n), 0)
#endif
#define INLINE_COPY_TO_USER
#define INLINE_COPY_FROM_USER
static inline unsigned long __must_check clear_user(void __user *to, unsigned long n)
{
if (access_ok(to, n))
n = __clear_user(to, n);
return n;
}
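/*
 * Sketch (names illustrative): clear_user() returns the number of bytes
 * that could not be zeroed, so success is a zero return:
 *
 *	if (clear_user(ubuf, len))
 *		return -EFAULT;
 */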
/* These are from lib/ code, and use __get_user() and friends */
extern long strncpy_from_user(char *dest, const char __user *src, long count);
extern __must_check long strnlen_user(const char __user *str, long n);
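/*
 * Usage sketch for the lib/ helpers above (hypothetical names): a negative
 * return is an error, and a return equal to the buffer size means the
 * string did not fit and is not NUL-terminated in the buffer:
 *
 *	char name[32];
 *	long len = strncpy_from_user(name, uname, sizeof(name));
 *	if (len < 0)
 *		return len;
 *	if (len == sizeof(name))
 *		return -ENAMETOOLONG;
 */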
#endif /* _ASMARM_UACCESS_H */