linux/kernel/locking
Yuyang Du 01bb6f0af9 locking/lockdep: Change the range of class_idx in held_lock struct
held_lock->class_idx is used to point to the class of the held lock. The
index is shifted by 1 so that index 0 can mean "no class", which forces
the class index to be shifted back and forth on every access, and that
is not worth doing.

The reasons are: (1) there is no "no-class" held_lock to begin with, and
(2) index 0 appears to be used for error checking, but if something does
go wrong, the index cannot be relied on to detect it, because whatever
goes wrong will not deliberately set class_idx to 0 to tell us about it.

Therefore, change the index to start from 0. This saves a lot of
back-and-forth shifting and gives a class slot back to lock_classes.
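
A minimal before/after sketch of the indexing change (illustrative
only, not the exact kernel diff; lock_classes[] and held_lock here are
toy stand-ins for the real lockdep structures):

    #include <assert.h>

    struct lock_class { int dummy; };
    struct held_lock { unsigned int class_idx; };

    static struct lock_class lock_classes[8];

    int main(void)
    {
        struct held_lock hl;
        struct lock_class *class = &lock_classes[3];

        /* Old scheme: store with a +1 offset so that 0 can mean
         * "no class", and undo the offset on every lookup. */
        hl.class_idx = (class - lock_classes) + 1;
        assert(lock_classes + hl.class_idx - 1 == class);

        /* New scheme: the index maps directly into lock_classes[]. */
        hl.class_idx = class - lock_classes;
        assert(lock_classes + hl.class_idx == class);
        return 0;
    }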

Since index 0 is now a valid lock class, change the initial chain key to
-1 to avoid a key collision, which would otherwise occur because
__jhash_mix(0, 0, 0) = 0. In fact, the initial chain key can be any
arbitrary value other than 0.
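
To see why an initial chain key of 0 collides once index 0 is a valid
class: lockdep folds each held lock's class index into the chain key
with __jhash_mix() from <linux/jhash.h>, and mixing all-zero inputs
yields all zeros, so a chain whose first lock has class_idx 0 would hash
to the same key as the empty chain. A self-contained userspace demo
(__jhash_mix() matches the kernel header; rol32() is a simplified
stand-in, valid for the nonzero shifts used here):

    #include <stdint.h>
    #include <stdio.h>

    static inline uint32_t rol32(uint32_t word, unsigned int shift)
    {
        return (word << shift) | (word >> (32 - shift));
    }

    #define __jhash_mix(a, b, c)            \
    {                                       \
        a -= c;  a ^= rol32(c, 4);  c += b; \
        b -= a;  b ^= rol32(a, 6);  a += c; \
        c -= b;  c ^= rol32(b, 8);  b += c; \
        a -= c;  a ^= rol32(c, 16); c += a; \
        b -= a;  b ^= rol32(a, 19); a += b; \
        c -= b;  c ^= rol32(b, 4);  b += c; \
    }

    int main(void)
    {
        uint32_t a = 0, b = 0, c = 0;   /* chain key 0, class_idx 0 */

        __jhash_mix(a, b, c);
        printf("%u %u %u\n", a, b, c);  /* prints "0 0 0": collision */
        return 0;
    }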

In addition, a bitmap is maintained to keep track of the lock classes in
use, and the validity of a held lock is checked against that bitmap.
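
A sketch of the bitmap bookkeeping along the lines of the patch (the
names match kernel/locking/lockdep.c, but this is an outline, not the
full diff):

    DECLARE_BITMAP(lock_classes_in_use, MAX_LOCKDEP_KEYS);

    /* When a class is registered (register_lock_class()): */
    __set_bit(class - lock_classes, lock_classes_in_use);

    /* When a held lock is dereferenced (hlock_class()): */
    if (!test_bit(hlock->class_idx, lock_classes_in_use)) {
        /* Invalid or never-registered class index. */
        DEBUG_LOCKS_WARN_ON(1);
        return NULL;
    }
    return lock_classes + hlock->class_idx;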

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bvanassche@acm.org
Cc: frederic@kernel.org
Cc: ming.lei@redhat.com
Cc: will.deacon@arm.com
Link: https://lkml.kernel.org/r/20190506081939.74287-10-duyuyang@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03 11:55:43 +02:00
lock_events.c locking/lock_events: Don't show pvqspinlock events on bare metal 2019-04-10 10:56:05 +02:00
lock_events.h locking/lock_events: Use this_cpu_add() when necessary 2019-05-24 14:17:18 -07:00
lock_events_list.h locking/rwsem: Enable lock event counting 2019-04-10 10:56:06 +02:00
lockdep.c locking/lockdep: Change the range of class_idx in held_lock struct 2019-06-03 11:55:43 +02:00
lockdep_internals.h locking/lockdep: Test all incompatible scenarios at once in check_irq_usage() 2019-04-29 08:29:20 +02:00
lockdep_proc.c locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count() 2019-02-28 07:55:44 +01:00
lockdep_states.h
locktorture.c locktorture: NULL cxt.lwsa and cxt.lrsa to allow bad-arg detection 2019-03-26 14:42:53 -07:00
Makefile locking/lock_events: Make lock_events available for all archs & other locks 2019-04-10 10:56:04 +02:00
mcs_spinlock.h
mutex-debug.c locking/mutex: Replace spin_is_locked() with lockdep 2018-11-12 09:06:22 -08:00
mutex-debug.h
mutex.c treewide: Add SPDX license identifier for missed files 2019-05-21 10:50:45 +02:00
mutex.h
osq_lock.c
percpu-rwsem.c treewide: Add SPDX license identifier for missed files 2019-05-21 10:50:45 +02:00
qrwlock.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157 2019-05-30 11:26:37 -07:00
qspinlock.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157 2019-05-30 11:26:37 -07:00
qspinlock_paravirt.h locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs 2019-04-10 10:56:03 +02:00
qspinlock_stat.h treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157 2019-05-30 11:26:37 -07:00
rtmutex-debug.c
rtmutex-debug.h
rtmutex.c treewide: Add SPDX license identifier for missed files 2019-05-21 10:50:45 +02:00
rtmutex.h
rtmutex_common.h
rwsem-xadd.c locking/rwsem: Prevent decrement of reader count before increment 2019-05-07 08:46:46 +02:00
rwsem.c locking/rwsem: Enhance DEBUG_RWSEMS_WARN_ON() macro 2019-04-10 10:56:03 +02:00
rwsem.h locking/rwsem: Prevent unneeded warning during locking selftest 2019-04-14 11:09:35 +02:00
semaphore.c
spinlock.c asm-generic/mmiowb: Add generic implementation of mmiowb() tracking 2019-04-08 11:59:39 +01:00
spinlock_debug.c mmiowb: Hook up mmiowb helpers to spinlocks and generic I/O accessors 2019-04-08 11:59:47 +01:00
test-ww_mutex.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 9 2019-05-21 11:28:40 +02:00