Mirror of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
The break_lock data structure and code for spinlocks is quite nasty. Not only does it double the size of a spinlock but it changes locking to a potentially less optimal trylock.

Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a __raw_spin_is_contended that uses the lock data itself to determine whether there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is not set.

Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to decouple it from the spinlock implementation, and make it typesafe (rwlocks do not have any need_lockbreak sites -- why do they even get bloated up with that break_lock then?).

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
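To make the contention check concrete, here is a minimal userspace sketch of the idea, assuming a hypothetical ticket-style lock layout (the struct, field names, and helper names below are illustrative assumptions, not the kernel's actual raw_spinlock_t or its arch implementations): waiters can be counted from the lock word itself, so no separate break_lock field is needed, and a spin_needbreak-style helper reduces to a contention query.

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative ticket-style lock layout (hypothetical, for this sketch
 * only): 'tail' is the next ticket to hand out, 'head' is the ticket
 * currently allowed to hold the lock.  Real kernel code would read the
 * lock word atomically rather than through plain loads.
 */
struct ticket_lock {
	unsigned int head;
	unsigned int tail;
};

/*
 * Contention can be read out of the lock data itself: more than one
 * outstanding ticket means at least one CPU is spinning.  This is the
 * idea behind a __raw_spin_is_contended that avoids a separate
 * break_lock field when CONFIG_GENERIC_LOCKBREAK is not set.
 */
static bool ticket_lock_is_contended(const struct ticket_lock *lock)
{
	return (lock->tail - lock->head) > 1;
}

/*
 * A spin_needbreak-style helper: a lock holder in a long loop can poll
 * this and voluntarily drop/reacquire the lock when waiters are queued,
 * without the caller knowing how contention is detected.
 */
static bool ticket_lock_needbreak(const struct ticket_lock *lock)
{
	return ticket_lock_is_contended(lock);
}

int main(void)
{
	/* Simulate one holder (ticket 0) plus two waiters (tickets 1 and 2). */
	struct ticket_lock lock = { .head = 0, .tail = 3 };

	printf("contended:  %d\n", ticket_lock_is_contended(&lock)); /* prints 1 */
	printf("need break: %d\n", ticket_lock_needbreak(&lock));    /* prints 1 */
	return 0;
}
```

Routing spin_needbreak through a contention query like this is what decouples the lock-break decision from the spinlock internals: each lock implementation can answer "are there waiters?" from its own lock data, while the heavier break_lock bookkeeping stays confined behind CONFIG_GENERIC_LOCKBREAK.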
| Name |
|---|
| allocpercpu.c |
| backing-dev.c |
| bootmem.c |
| bounce.c |
| fadvise.c |
| filemap.c |
| filemap_xip.c |
| fremap.c |
| highmem.c |
| hugetlb.c |
| internal.h |
| Kconfig |
| madvise.c |
| Makefile |
| memory.c |
| memory_hotplug.c |
| mempolicy.c |
| mempool.c |
| migrate.c |
| mincore.c |
| mlock.c |
| mmap.c |
| mmzone.c |
| mprotect.c |
| mremap.c |
| msync.c |
| nommu.c |
| oom_kill.c |
| page-writeback.c |
| page_alloc.c |
| page_io.c |
| page_isolation.c |
| pdflush.c |
| prio_tree.c |
| quicklist.c |
| readahead.c |
| rmap.c |
| shmem.c |
| shmem_acl.c |
| slab.c |
| slob.c |
| slub.c |
| sparse-vmemmap.c |
| sparse.c |
| swap.c |
| swap_state.c |
| swapfile.c |
| thrash.c |
| tiny-shmem.c |
| truncate.c |
| util.c |
| vmalloc.c |
| vmscan.c |
| vmstat.c |