// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:build goexperiment.staticlockranking
runtime: static lock ranking for the runtime (enabled by GOEXPERIMENT)
I took some of the infrastructure from Austin's lock logging CR
https://go-review.googlesource.com/c/go/+/192704 (with deadlock
detection from the logs), and developed a setup to give static lock
ranking for runtime locks.
Static lock ranking establishes a documented total ordering among locks,
and then reports an error if the total order is violated. This can
happen if a deadlock happens (by acquiring a sequence of locks in
different orders), or if just one side of a possible deadlock happens.
Lock ordering deadlocks cannot happen as long as the lock ordering is
followed.
Along the way, I found a deadlock involving the new timer code, which Ian fixed
via https://go-review.googlesource.com/c/go/+/207348, as well as two other
potential deadlocks.
See the constants at the top of runtime/lockrank.go to show the static
lock ranking that I ended up with, along with some comments. This is
great documentation of the current intended lock ordering when acquiring
multiple locks in the runtime.
I also added an array lockPartialOrder[] which shows and enforces the
current partial ordering among locks (which is embedded within the total
ordering). This is more specific about the dependencies among locks.
I don't try to check the ranking within a lock class with multiple locks
that can be acquired at the same time (i.e. check the ranking when
multiple hchan locks are acquired).
Currently, I am doing a lockInit() call to set the lock rank of most
locks. Any lock that is not otherwise initialized is assumed to be a
leaf lock (a very high rank lock), so that eliminates the need to do
anything for a bunch of locks (including all architecture-dependent
locks). For two locks, root.lock and notifyList.lock (only in the
runtime/sema.go file), it is not as easy to do lock initialization, so
instead, I am passing the lock rank with the lock calls.
For Windows compilation, I needed to increase the StackGuard size from
896 to 928 because of the new lock-rank checking functions.
Checking of the static lock ranking is enabled by setting
GOEXPERIMENT=staticlockranking before doing a run.
To make sure that the static lock ranking code has no overhead in memory
or CPU when not enabled by GOEXPERIMENT, I changed 'go build/install' so
that it defines a build tag (with the same name) whenever any experiment
has been baked into the toolchain (by checking Expstring()). This allows
me to avoid increasing the size of the 'mutex' type when static lock
ranking is not enabled.
Fixes #38029
Change-Id: I154217ff307c47051f8dae9c2a03b53081acd83a
Reviewed-on: https://go-review.googlesource.com/c/go/+/207619
Reviewed-by: Dan Scales <danscales@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
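The rank check described above can be sketched as a small standalone program. This is a minimal sketch, assuming illustrative rank names and an illustrative partial-order table — none of these are the runtime's actual constants:

```go
package main

import "fmt"

// Illustrative ranks, lowest to highest. These names are stand-ins, not the
// runtime's actual lockRank constants.
const (
	rankSched = iota
	rankAllg
	rankHchan
	rankLeaf // stand-in for a leaf rank, in the spirit of lockRankLeafRank
)

// partialOrder[r] lists the ranks that may be held when a lock of rank r is
// acquired (a sketch of the runtime's lockPartialOrder table).
var partialOrder = map[int][]int{
	rankAllg:  {rankSched},
	rankHchan: {rankSched, rankAllg},
}

// checkRank mirrors the shape of the runtime's check: a lower-ranked lock may
// never be acquired after a higher-ranked one, a leaf may follow any
// non-leaf, and otherwise the held rank must appear in the partial order.
func checkRank(prevRank, rank int) bool {
	if rank < prevRank {
		return false // total order violated
	}
	if rank == rankLeaf {
		return prevRank < rankLeaf
	}
	for _, r := range partialOrder[rank] {
		if r == prevRank {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(checkRank(rankSched, rankHchan)) // → true (allowed)
	fmt.Println(checkRank(rankHchan, rankSched)) // → false (rank inversion)
}
```

A violation here corresponds to the runtime throwing with a report of the held locks.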
package runtime

import (
	"internal/runtime/atomic"
	"unsafe"
)

const staticLockRanking = true

// worldIsStopped is accessed atomically to track world-stops. 1 == world
// stopped.
var worldIsStopped atomic.Uint32
// lockRankStruct is embedded in mutex
type lockRankStruct struct {
	// static lock ranking of the lock
	rank lockRank
	// pad field to make sure lockRankStruct is a multiple of 8 bytes, even on
	// 32-bit systems.
	pad int
}
runtime: generate the lock ranking from a DAG description
Currently, the runtime lock rank graph is maintained manually in a
large set of arrays that give the partial order and a manual
topological sort of this partial order. Any changes to the rank graph
are difficult to reason about and hard to review, as well as likely to
cause merge conflicts. Furthermore, because the partial order is
manually maintained, it's not actually transitively closed (though
it's close), meaning there are many cases where rank a can be acquired
before b and b before c, but a cannot be acquired before c. While this
isn't technically wrong, it's very strange in the context of lock
ordering.
Replace all of this with a much more compact, readable, and
maintainable description of the rank graph written in the internal/dag
graph language. We statically generate the runtime structures from
this description, which has the advantage that the parser doesn't have
to run during runtime initialization and the structures can live in
static data where they can be accessed from any point during runtime
init.
The current description was automatically generated from the existing
partial order, combined with a transitive reduction. This ensures it's
correct, but it could use some manual massaging to call out the
logical layers and add some structure.
We do lose the ad hoc string names of the lock ranks in this
translation, which could mostly be derived from the rank constant
names, but not always. I may bring those back but in a more uniform
way.
We no longer need the tests in lockrank_test.go because they were
checking that we manually maintained the structures correctly.
Fixes #53789.
Change-Id: I54451d561b22e61150aff7e9b8602ba9737e1b9b
Reviewed-on: https://go-review.googlesource.com/c/go/+/418715
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
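The generation step the message describes — taking a compact DAG description and producing a transitively closed partial order — can be sketched as follows. This is a simplified stand-in for what internal/dag provides; the map-based rule encoding and rank names are illustrative, not the real syntax:

```go
package main

import "fmt"

// transClose computes the transitive closure of a lock-ordering DAG given as
// an edge list: edges[b] lists the ranks that may be held when b is acquired.
func transClose(edges map[string][]string) map[string]map[string]bool {
	reach := make(map[string]map[string]bool)
	var visit func(n string) map[string]bool
	visit = func(n string) map[string]bool {
		if r, ok := reach[n]; ok {
			return r
		}
		r := make(map[string]bool)
		reach[n] = r
		for _, m := range edges[n] {
			r[m] = true
			for k := range visit(m) {
				r[k] = true
			}
		}
		return r
	}
	for n := range edges {
		visit(n)
	}
	return reach
}

func main() {
	// "sched < allg < hchan": sched may be held when allg is acquired, etc.
	edges := map[string][]string{
		"allg":  {"sched"},
		"hchan": {"allg"},
	}
	reach := transClose(edges)
	fmt.Println(reach["hchan"]["sched"]) // → true: closure adds hchan -> sched
}
```

Computing the closure at generation time is what lets the emitted tables live in static data, with no parsing during runtime init.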

// lockInit(l *mutex, rank lockRank) sets the rank of a lock before it is used.
// If there is no clear place to initialize a lock, then the rank of a lock can be
// specified during the lock call itself via lockWithRank(l *mutex, rank lockRank).
func lockInit(l *mutex, rank lockRank) {
	l.rank = rank
}

func getLockRank(l *mutex) lockRank {
	return l.rank
}

// lockWithRank is like lock(l), but allows the caller to specify a lock rank
// when acquiring a non-static lock.
runtime: drop nosplit from primary lockrank functions
acquireLockRank and releaseLockRank are called from nosplit context, and
thus must be nosplit.
lockWithRank, unlockWithRank, and lockWithRankMayAcquire are called from
splittable context, and thus don't strictly need to be nosplit.
The stated reasoning for making these functions nosplit is to avoid
re-entrant calls due to a stack split on function entry taking a lock.
There are two potential issues at play here:
1. A stack split on function entry adds a new lock ordering edge before
we (a) take lock l, or (b) release lock l.
2. A stack split in a child call (such as to lock2) introduces a new
lock ordering edge _in the wrong order_ because e.g., in the case of
lockWithRank, we've noted that l is taken, but the stack split in
lock2 actually takes stack split locks _before_ l is actually locked.
(1) is indeed avoided by marking these functions nosplit, but this is
really just a bit of duct tape that generally has no effect overall. Any
earlier call can have a stack split and introduce the same new edge.
This includes lock/unlock which are not nosplit!
I began this CL as a change to extend nosplit to lock and unlock to try
to make this mitigation more effective, but I've realized that as long
as there is a _single_ nosplit call between a lock and unlock, we can
end up with the edge. There seems to be few enough cases without any
calls that it does not seem worth the extra cognitive load to extend
nosplit throughout all of the locking functions.
(2) is a real issue which would cause incorrect ordering, but it is
already handled by switching to the system stack before recording the
lock ordering. Adding / removing nosplit has no effect on this issue.
Change-Id: I94fbd21b2bf928dbf1bf71aabb6788fc0a012829
Reviewed-on: https://go-review.googlesource.com/c/go/+/254367
Run-TryBot: Michael Pratt <mpratt@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Dan Scales <danscales@google.com>
Trust: Michael Pratt <mpratt@google.com>
//
// Note that we need to be careful about stack splits:
//
// This function is not nosplit, thus it may split at function entry. This may
// introduce a new edge in the lock order, but it is no different from any
// other (nosplit) call before this call (including the call to lock() itself).
//
// However, we switch to the systemstack to record the lock held to ensure that
// we record an accurate lock ordering. e.g., without systemstack, a stack
// split on entry to lock2() would record stack split locks as taken after l,
// even though l is not actually locked yet.
func lockWithRank(l *mutex, rank lockRank) {
	if l == &debuglock || l == &paniclk || l == &raceFiniLock {
		// debuglock is only used for println/printlock(). Don't do lock
		// rank recording for it, since print/println are used when
		// printing out a lock ordering problem below.
		//
		// paniclk is only used for fatal throw/panic. Don't do lock
		// ranking recording for it, since we throw after reporting a
		// lock ordering problem. Additionally, paniclk may be taken
		// after effectively any lock (anywhere we might panic), which
		// the partial order doesn't cover.
		//
		// raceFiniLock is held while exiting when running
		// the race detector. Don't do lock rank recording for it,
		// since we are exiting.
		lock2(l)
		return
	}
	if rank == 0 {
		rank = lockRankLeafRank
	}
	gp := getg()
	// Log the new class.
	systemstack(func() {
		i := gp.m.locksHeldLen
		if i >= len(gp.m.locksHeld) {
			throw("too many locks held concurrently for rank checking")
		}
		gp.m.locksHeld[i].rank = rank
		gp.m.locksHeld[i].lockAddr = uintptr(unsafe.Pointer(l))
		gp.m.locksHeldLen++

		// i is the index of the lock being acquired
		if i > 0 {
			checkRanks(gp, gp.m.locksHeld[i-1].rank, rank)
		}
		lock2(l)
	})
}

// nosplit to ensure it can be called in as many contexts as possible.
//
//go:nosplit
func printHeldLocks(gp *g) {
	if gp.m.locksHeldLen == 0 {
		println("<none>")
		return
	}

	for j, held := range gp.m.locksHeld[:gp.m.locksHeldLen] {
		println(j, ":", held.rank.String(), held.rank, unsafe.Pointer(gp.m.locksHeld[j].lockAddr))
	}
}

// acquireLockRankAndM acquires a rank which is not associated with a mutex
// lock. To maintain the invariant that an M with m.locks==0 does not hold any
// lock-like resources, it also acquires the M.
//
// This function may be called in nosplit context and thus must be nosplit.
//
//go:nosplit
func acquireLockRankAndM(rank lockRank) {
	acquirem()

	gp := getg()
	// Log the new class. See comment on lockWithRank.
	systemstack(func() {
		i := gp.m.locksHeldLen
		if i >= len(gp.m.locksHeld) {
			throw("too many locks held concurrently for rank checking")
		}
		gp.m.locksHeld[i].rank = rank
		gp.m.locksHeld[i].lockAddr = 0
		gp.m.locksHeldLen++

		// i is the index of the lock being acquired
		if i > 0 {
			checkRanks(gp, gp.m.locksHeld[i-1].rank, rank)
		}
	})
}
|
|
|
|
|
|
|
|
// checkRanks checks if goroutine g, which has mostly recently acquired a lock
|
|
|
|
// with rank 'prevRank', can now acquire a lock with rank 'rank'.
|
2020-08-21 09:49:56 -06:00
|
|
|
//
|
|
|
|
//go:systemstack
|
runtime: static lock ranking for the runtime (enabled by GOEXPERIMENT)
I took some of the infrastructure from Austin's lock logging CR
https://go-review.googlesource.com/c/go/+/192704 (with deadlock
detection from the logs), and developed a setup to give static lock
ranking for runtime locks.
Static lock ranking establishes a documented total ordering among locks,
and then reports an error if the total order is violated. This can
happen if a deadlock happens (by acquiring a sequence of locks in
different orders), or if just one side of a possible deadlock happens.
Lock ordering deadlocks cannot happen as long as the lock ordering is
followed.
Along the way, I found a deadlock involving the new timer code, which Ian fixed
via https://go-review.googlesource.com/c/go/+/207348, as well as two other
potential deadlocks.
See the constants at the top of runtime/lockrank.go to show the static
lock ranking that I ended up with, along with some comments. This is
great documentation of the current intended lock ordering when acquiring
multiple locks in the runtime.
I also added an array lockPartialOrder[] which shows and enforces the
current partial ordering among locks (which is embedded within the total
ordering). This is more specific about the dependencies among locks.
I don't try to check the ranking within a lock class with multiple locks
that can be acquired at the same time (i.e. check the ranking when
multiple hchan locks are acquired).
Currently, I am doing a lockInit() call to set the lock rank of most
locks. Any lock that is not otherwise initialized is assumed to be a
leaf lock (a very high rank lock), so that eliminates the need to do
anything for a bunch of locks (including all architecture-dependent
locks). For two locks, root.lock and notifyList.lock (only in the
runtime/sema.go file), it is not as easy to do lock initialization, so
instead, I am passing the lock rank with the lock calls.
For Windows compilation, I needed to increase the StackGuard size from
896 to 928 because of the new lock-rank checking functions.
Checking of the static lock ranking is enabled by setting
GOEXPERIMENT=staticlockranking before doing a run.
To make sure that the static lock ranking code has no overhead in memory
or CPU when not enabled by GOEXPERIMENT, I changed 'go build/install' so
that it defines a build tag (with the same name) whenever any experiment
has been baked into the toolchain (by checking Expstring()). This allows
me to avoid increasing the size of the 'mutex' type when static lock
ranking is not enabled.
Fixes #38029
Change-Id: I154217ff307c47051f8dae9c2a03b53081acd83a
Reviewed-on: https://go-review.googlesource.com/c/go/+/207619
Reviewed-by: Dan Scales <danscales@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-11-13 18:34:47 -07:00
func checkRanks(gp *g, prevRank, rank lockRank) {
	rankOK := false
	if rank < prevRank {
		// If rank < prevRank, then we definitely have a rank error
		rankOK = false
	} else if rank == lockRankLeafRank {
		// If new lock is a leaf lock, then the preceding lock can
		// be anything except another leaf lock.
		rankOK = prevRank < lockRankLeafRank
	} else {
		// We've now verified the total lock ranking, but we
		// also enforce the partial ordering specified by
		// lockPartialOrder as well. Two locks with the same rank
		// can only be acquired at the same time if explicitly
		// listed in the lockPartialOrder table.
		list := lockPartialOrder[rank]
		for _, entry := range list {
			if entry == prevRank {
				rankOK = true
				break
			}
		}
	}
	if !rankOK {
		printlock()
		println(gp.m.procid, " ======")
		printHeldLocks(gp)
		throw("lock ordering problem")
	}
}
runtime: drop nosplit from primary lockrank functions
acquireLockRank and releaseLockRank are called from nosplit context, and
thus must be nosplit.
lockWithRank, unlockWithRank, and lockWithRankMayAcquire are called from
splittable context, and thus don't strictly need to be nosplit.
The stated reasoning for making these functions nosplit is to avoid
re-entrant calls due to a stack split on function entry taking a lock.
There are two potential issues at play here:
1. A stack split on function entry adds a new lock ordering edge before
we (a) take lock l, or (b) release lock l.
2. A stack split in a child call (such as to lock2) introduces a new
lock ordering edge _in the wrong order_ because e.g., in the case of
lockWithRank, we've noted that l is taken, but the stack split in
lock2 actually takes stack split locks _before_ l is actually locked.
(1) is indeed avoided by marking these functions nosplit, but this is
really just a bit of duct tape that generally has no effect overall. Any
earlier call can have a stack split and introduce the same new edge.
This includes lock/unlock which are not nosplit!
I began this CL as a change to extend nosplit to lock and unlock to try
to make this mitigation more effective, but I've realized that as long
as there is a _single_ nosplit call between a lock and unlock, we can
end up with the edge. There seem to be few enough cases without any
calls that it does not seem worth the extra cognitive load to extend
nosplit throughout all of the locking functions.
(2) is a real issue which would cause incorrect ordering, but it is
already handled by switching to the system stack before recording the
lock ordering. Adding / removing nosplit has no effect on this issue.
Change-Id: I94fbd21b2bf928dbf1bf71aabb6788fc0a012829
Reviewed-on: https://go-review.googlesource.com/c/go/+/254367
Run-TryBot: Michael Pratt <mpratt@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Dan Scales <danscales@google.com>
Trust: Michael Pratt <mpratt@google.com>
2020-09-11 10:14:06 -06:00

// See comment on lockWithRank regarding stack splitting.
func unlockWithRank(l *mutex) {
	if l == &debuglock || l == &paniclk || l == &raceFiniLock {
		// See comment at beginning of lockWithRank.
		unlock2(l)
		return
	}
	gp := getg()
	systemstack(func() {
		found := false
		for i := gp.m.locksHeldLen - 1; i >= 0; i-- {
			if gp.m.locksHeld[i].lockAddr == uintptr(unsafe.Pointer(l)) {
				found = true
				copy(gp.m.locksHeld[i:gp.m.locksHeldLen-1], gp.m.locksHeld[i+1:gp.m.locksHeldLen])
				gp.m.locksHeldLen--
				break
			}
		}
		if !found {
			println(gp.m.procid, ":", l.rank.String(), l.rank, l)
			throw("unlock without matching lock acquire")
		}
		unlock2(l)
	})
}

// releaseLockRankAndM releases a rank which is not associated with a mutex
// lock. To maintain the invariant that an M with m.locks==0 does not hold any
// lock-like resources, it also releases the M.
//
// This function may be called in nosplit context and thus must be nosplit.
//
//go:nosplit
func releaseLockRankAndM(rank lockRank) {
	gp := getg()
	systemstack(func() {
		found := false
		for i := gp.m.locksHeldLen - 1; i >= 0; i-- {
			if gp.m.locksHeld[i].rank == rank && gp.m.locksHeld[i].lockAddr == 0 {
				found = true
				copy(gp.m.locksHeld[i:gp.m.locksHeldLen-1], gp.m.locksHeld[i+1:gp.m.locksHeldLen])
				gp.m.locksHeldLen--
				break
			}
		}
		if !found {
			println(gp.m.procid, ":", rank.String(), rank)
			throw("lockRank release without matching lockRank acquire")
		}
	})

	releasem(getg().m)
}

// nosplit because it may be called from nosplit contexts.
//
//go:nosplit
func lockWithRankMayAcquire(l *mutex, rank lockRank) {
	gp := getg()
	if gp.m.locksHeldLen == 0 {
		// No possibility of lock ordering problem if no other locks held
		return
	}

	systemstack(func() {
		i := gp.m.locksHeldLen
		if i >= len(gp.m.locksHeld) {
			throw("too many locks held concurrently for rank checking")
		}
		// Temporarily add this lock to the locksHeld list, so
		// checkRanks() will print out list, including this lock, if there
		// is a lock ordering problem.
		gp.m.locksHeld[i].rank = rank
		gp.m.locksHeld[i].lockAddr = uintptr(unsafe.Pointer(l))
		gp.m.locksHeldLen++
		checkRanks(gp, gp.m.locksHeld[i-1].rank, rank)
		gp.m.locksHeldLen--
	})
}

// nosplit to ensure it can be called in as many contexts as possible.
//
//go:nosplit
func checkLockHeld(gp *g, l *mutex) bool {
	for i := gp.m.locksHeldLen - 1; i >= 0; i-- {
		if gp.m.locksHeld[i].lockAddr == uintptr(unsafe.Pointer(l)) {
			return true
		}
	}
	return false
}

// assertLockHeld throws if l is not held by the caller.
//
// nosplit to ensure it can be called in as many contexts as possible.
//
//go:nosplit
func assertLockHeld(l *mutex) {
	gp := getg()

	held := checkLockHeld(gp, l)
	if held {
		return
	}

	// Crash from system stack to avoid splits that may cause
	// additional issues.
	systemstack(func() {
		printlock()
		print("caller requires lock ", l, " (rank ", l.rank.String(), "), holding:\n")
		printHeldLocks(gp)
		throw("not holding required lock!")
	})
}

// assertRankHeld throws if a mutex with rank r is not held by the caller.
//
// This is less precise than assertLockHeld, but can be used in places where a
// pointer to the exact mutex is not available.
//
// nosplit to ensure it can be called in as many contexts as possible.
//
//go:nosplit
func assertRankHeld(r lockRank) {
	gp := getg()

	for i := gp.m.locksHeldLen - 1; i >= 0; i-- {
		if gp.m.locksHeld[i].rank == r {
			return
		}
	}

	// Crash from system stack to avoid splits that may cause
	// additional issues.
	systemstack(func() {
		printlock()
		print("caller requires lock with rank ", r.String(), ", holding:\n")
		printHeldLocks(gp)
		throw("not holding required lock!")
	})
}

// worldStopped notes that the world is stopped.
//
// Caller must hold worldsema.
//
// nosplit to ensure it can be called in as many contexts as possible.
//
//go:nosplit
func worldStopped() {
	if stopped := worldIsStopped.Add(1); stopped != 1 {
		systemstack(func() {
			print("world stop count=", stopped, "\n")
			throw("recursive world stop")
		})
	}
}

// worldStarted notes that the world is starting.
//
// Caller must hold worldsema.
//
// nosplit to ensure it can be called in as many contexts as possible.
//
//go:nosplit
func worldStarted() {
	if stopped := worldIsStopped.Add(-1); stopped != 0 {
		systemstack(func() {
			print("world stop count=", stopped, "\n")
			throw("released non-stopped world stop")
		})
	}
}

// nosplit to ensure it can be called in as many contexts as possible.
//
//go:nosplit
func checkWorldStopped() bool {
	stopped := worldIsStopped.Load()
	if stopped > 1 {
		systemstack(func() {
			print("inconsistent world stop count=", stopped, "\n")
			throw("inconsistent world stop count")
		})
	}

	return stopped == 1
}

// assertWorldStopped throws if the world is not stopped. It does not check
// which M stopped the world.
//
// nosplit to ensure it can be called in as many contexts as possible.
//
//go:nosplit
func assertWorldStopped() {
	if checkWorldStopped() {
		return
	}

	throw("world not stopped")
}

// assertWorldStoppedOrLockHeld throws if the world is not stopped and the
// passed lock is not held.
//
// nosplit to ensure it can be called in as many contexts as possible.
//
//go:nosplit
func assertWorldStoppedOrLockHeld(l *mutex) {
	if checkWorldStopped() {
		return
	}

	gp := getg()
	held := checkLockHeld(gp, l)
	if held {
		return
	}

	// Crash from system stack to avoid splits that may cause
	// additional issues.
	systemstack(func() {
		printlock()
		print("caller requires world stop or lock ", l, " (rank ", l.rank.String(), "), holding:\n")
		println("<no world stop>")
		printHeldLocks(gp)
		throw("no world stop or required lock!")
	})
}