// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:build !goexperiment.staticlockranking

package runtime
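
// staticLockRanking is false because the staticlockranking GOEXPERIMENT is
// not enabled in this build; the hooks below are no-op stubs and mutex does
// not carry a lock rank.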
const staticLockRanking = false

// lockRankStruct is embedded in mutex, but is empty when staticlockranking is
// disabled (the default).
type lockRankStruct struct {
}
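
// lockInit is a no-op when static lock ranking is disabled.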
func lockInit(l *mutex, rank lockRank) {
}
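
// getLockRank always reports rank 0 when static lock ranking is disabled.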
func getLockRank(l *mutex) lockRank {
return 0
}
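
// lockWithRank simply acquires l; the rank is ignored when static lock
// ranking is disabled.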
func lockWithRank(l *mutex, rank lockRank) {
lock2(l)
}
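
// acquireLockRankAndM acquires the current M but does no rank bookkeeping
// when static lock ranking is disabled.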
// This function may be called in nosplit context and thus must be nosplit.
//
//go:nosplit
func acquireLockRankAndM(rank lockRank) {
acquirem()
}
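
// unlockWithRank simply releases l; there is no rank bookkeeping when static
// lock ranking is disabled.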
func unlockWithRank(l *mutex) {
unlock2(l)
}
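
// releaseLockRankAndM releases the current M but does no rank bookkeeping
// when static lock ranking is disabled.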
// This function may be called in nosplit context and thus must be nosplit.
//
//go:nosplit
func releaseLockRankAndM(rank lockRank) {
releasem(getg().m)
}
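
// lockWithRankMayAcquire is a no-op when static lock ranking is disabled.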
// This function may be called in nosplit context and thus must be nosplit.
//
//go:nosplit
func lockWithRankMayAcquire(l *mutex, rank lockRank) {
}
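
// assertLockHeld checks nothing when static lock ranking is disabled.
//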
//go:nosplit
func assertLockHeld(l *mutex) {
}
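
// assertRankHeld checks nothing when static lock ranking is disabled.
//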
//go:nosplit
func assertRankHeld(r lockRank) {
}
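
// worldStopped is a no-op when static lock ranking is disabled.
//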
//go:nosplit
func worldStopped() {
}
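
// worldStarted is a no-op when static lock ranking is disabled.
//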
//go:nosplit
func worldStarted() {
}
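
// assertWorldStopped checks nothing when static lock ranking is disabled.
//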
//go:nosplit
func assertWorldStopped() {
}
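
// assertWorldStoppedOrLockHeld checks nothing when static lock ranking is
// disabled.
//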
//go:nosplit
func assertWorldStoppedOrLockHeld(l *mutex) {
}