linux/mm
Linus Torvalds ca1a46d6f5 slab changes for 5.17

Merge tag 'slab-for-5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab

Pull slab updates from Vlastimil Babka:

 - Separate struct slab from struct page - an offshoot of the page
   folio work.

   The struct page fields used by the slab allocators are moved from
   struct page to a new struct slab that uses the same physical
   storage. Like struct folio, a struct slab always corresponds to a
   head page. This brings better type safety, separation of large
   kmalloc allocations from true slabs, and a cleanup of the related
   objcg code. A rough sketch of the overlay follows these notes.

 - A SLAB_MERGE_DEFAULT config optimization.
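
   The following is an editor's sketch of the overlay idea, not the
   upstream definition: struct slab reinterprets the storage of the
   head struct page backing a slab, so the split costs no memory and
   converting between the two views is essentially a cast. The field
   layout and the page_slab()/slab_page() helpers below are simplified
   assumptions.

       struct page;            /* opaque here; see include/linux/mm_types.h */
       struct kmem_cache;      /* opaque here */

       /* Illustrative layout only, not the in-tree struct slab. */
       struct slab {
               unsigned long flags;            /* overlays page->flags */
               struct kmem_cache *slab_cache;  /* owning cache */
               void *freelist;                 /* first free object */
               unsigned int inuse;             /* allocated objects */
       };

       /* Assumed cast-only helpers between two views of the same memory. */
       static inline struct slab *page_slab(struct page *page)
       {
               return (struct slab *)page;
       }

       static inline struct page *slab_page(struct slab *slab)
       {
               return (struct page *)slab;
       }

   The in-tree definitions are more elaborate, but the core point is
   the same: both types name the same underlying memory, so the
   conversion is free at runtime while the compiler now distinguishes
   slab-internal code from generic page handling.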

* tag 'slab-for-5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab: (33 commits)
  mm/slob: Remove unnecessary page_mapcount_reset() function call
  bootmem: Use page->index instead of page->freelist
  zsmalloc: Stop using slab fields in struct page
  mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
  mm/slub: Simplify struct slab slabs field definition
  mm/sl*b: Differentiate struct slab fields by sl*b implementations
  mm/kfence: Convert kfence_guarded_alloc() to struct slab
  mm/kasan: Convert to struct folio and struct slab
  mm/slob: Convert SLOB to use struct slab and struct folio
  mm/memcg: Convert slab objcgs from struct page to struct slab
  mm: Convert struct page to struct slab in functions used by other subsystems
  mm/slab: Finish struct page to struct slab conversion
  mm/slab: Convert most struct page to struct slab by spatch
  mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
  mm/slub: Finish struct page to struct slab conversion
  mm/slub: Convert most struct page to struct slab by spatch
  mm/slub: Convert pfmemalloc_match() to take a struct slab
  mm/slub: Convert __free_slab() to use struct slab
  mm/slub: Convert alloc_slab_page() to return a struct slab
  mm/slub: Convert print_page_info() to print_slab_info()
  ...
2022-01-10 11:58:12 -08:00
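
Most of the commits above mechanically switch slab-internal code from
struct page to struct slab, many of them generated with spatch
(Coccinelle). The snippet below is a hedged sketch of that pattern,
reusing the illustrative struct slab and page_slab() helper from the
sketch above; the function names are made up for illustration.

    /*
     * Before the series, a helper like this took a struct page and read
     * the slab fields (e.g. the freelist) directly off it. Afterwards it
     * takes a struct slab, so non-slab pages cannot be passed by mistake.
     */
    static inline void *slab_first_free(struct slab *slab)
    {
            return slab->freelist;
    }

    /* Callers that still hold a struct page convert at the boundary. */
    static inline void *page_first_free(struct page *page)
    {
            return slab_first_free(page_slab(page));
    }

The type now documents the contract: anything that takes a struct slab
is slab-internal, while code that genuinely deals with whole pages or
folios keeps using struct page or struct folio.
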
damon mm/damon/dbgfs: fix 'struct pid' leaks in 'dbgfs_target_ids_write()' 2021-12-31 09:20:12 -08:00
kasan mm/kasan: Convert to struct folio and struct slab 2022-01-06 12:26:14 +01:00
kfence slab changes for 5.17 2022-01-10 11:58:12 -08:00
backing-dev.c mm: bdi: initialize bdi_min_ratio when bdi is unregistered 2021-12-10 17:10:56 -08:00
balloon_compaction.c
bootmem_info.c bootmem: Use page->index instead of page->freelist 2022-01-06 12:27:03 +01:00
cleancache.c
cma.c
cma.h
cma_debug.c
cma_sysfs.c
compaction.c
debug.c
debug_page_ref.c
debug_vm_pgtable.c
dmapool.c
early_ioremap.c
fadvise.c
failslab.c
filemap.c
folio-compat.c
frontswap.c
gup.c
gup_test.c
gup_test.h
highmem.c
hmm.c
huge_memory.c
hugetlb.c hugetlbfs: fix issue of preallocation of gigantic pages can't work 2021-12-10 17:10:56 -08:00
hugetlb_cgroup.c
hugetlb_vmemmap.c
hugetlb_vmemmap.h
hwpoison-inject.c
init-mm.c
internal.h
interval_tree.c
io-mapping.c
ioremap.c
Kconfig
Kconfig.debug
khugepaged.c
kmemleak.c
ksm.c
list_lru.c
maccess.c
madvise.c
Makefile
mapping_dirty_helpers.c
memblock.c
memcontrol.c mm/memcg: Convert slab objcgs from struct page to struct slab 2022-01-06 12:26:14 +01:00
memfd.c
memory-failure.c - Add support for handling hw errors in SGX pages: poisoning, recovering 2022-01-10 09:44:09 -08:00
memory.c
memory_hotplug.c
mempolicy.c mm: mempolicy: fix THP allocations escaping mempolicy restrictions 2021-12-25 12:20:55 -08:00
mempool.c
memremap.c
memtest.c
migrate.c
mincore.c
mlock.c
mm_init.c
mmap.c
mmap_lock.c
mmu_gather.c
mmu_notifier.c
mmzone.c
mprotect.c
mremap.c
msync.c
nommu.c
oom_kill.c
page-writeback.c
page_alloc.c
page_counter.c
page_ext.c
page_idle.c
page_io.c
page_isolation.c
page_owner.c
page_poison.c
page_reporting.c
page_reporting.h
page_vma_mapped.c
pagewalk.c
percpu-internal.h
percpu-km.c
percpu-stats.c
percpu-vm.c
percpu.c
pgalloc-track.h
pgtable-generic.c
process_vm_access.c
ptdump.c
readahead.c
rmap.c
rodata_test.c
secretmem.c
shmem.c
shuffle.c
shuffle.h
slab.c mm/kasan: Convert to struct folio and struct slab 2022-01-06 12:26:14 +01:00
slab.h mm/slob: Remove unnecessary page_mapcount_reset() function call 2022-01-06 12:27:28 +01:00
slab_common.c mm: Use struct slab in kmem_obj_info() 2022-01-06 12:25:51 +01:00
slob.c mm/slob: Remove unnecessary page_mapcount_reset() function call 2022-01-06 12:27:28 +01:00
slub.c mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled 2022-01-06 12:26:53 +01:00
sparse-vmemmap.c
sparse.c bootmem: Use page->index instead of page->freelist 2022-01-06 12:27:03 +01:00
swap.c
swap_cgroup.c
swap_slots.c
swap_state.c
swapfile.c
truncate.c
usercopy.c mm: Convert check_heap_object() to use struct slab 2022-01-06 12:25:51 +01:00
userfaultfd.c
util.c
vmacache.c
vmalloc.c
vmpressure.c
vmscan.c mm: vmscan: reduce throttling due to a failure to make progress -fix 2021-12-31 13:12:55 -08:00
vmstat.c
workingset.c
z3fold.c
zbud.c
zpool.c
zsmalloc.c zsmalloc: Stop using slab fields in struct page 2022-01-06 12:27:03 +01:00
zswap.c