TestMutexProfile and TestEmptyCallStack couldn't run multiple times
because they mutate state in the runtime (mutex profile counters and
a user-defined profile type) and check that the state matches what it
is expected to be after the very first run.
We fix TestMutexProfile by relaxing the expected state condition.
We fix TestEmptyCallStack by creating a new profile with a different
name every time the test runs.
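The TestEmptyCallStack fix follows roughly this pattern; the profile
name format and counter below are illustrative, not the test's actual
code:

    package main

    import (
        "fmt"
        "runtime/pprof"
        "sync/atomic"
    )

    var profSeq uint32

    // newTestProfile returns a fresh user-defined profile. Using a new
    // name per run avoids the panic from pprof.NewProfile when a
    // profile with the same name already exists in the process.
    func newTestProfile() *pprof.Profile {
        name := fmt.Sprintf("test_empty_call_stack_%d", atomic.AddUint32(&profSeq, 1))
        return pprof.NewProfile(name)
    }

    func main() {
        p := newTestProfile()
        fmt.Println(p.Name())
    }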
For #25520
Change-Id: I8e50cd9526eb650c8989457495ff90a24ce07863
Reviewed-on: https://go-review.googlesource.com/114495
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The general policy for the current state of js/wasm is that it only
has to support tests that are also supported by nacl.
The test nilptr3.go makes assumptions about which nil checks can be
removed. Since WebAssembly does not signal on reading a null pointer,
all nil checks have to be explicit.
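For context, an illustrative sketch of the explicit check that must
remain on wasm (the function is made up for illustration):

    package main

    func load(p *int) int {
        // On wasm the compiler must keep an explicit check equivalent
        // to this one, because reading *p with p == nil does not fault.
        if p == nil {
            panic("runtime error: invalid memory address or nil pointer dereference")
        }
        return *p
    }

    func main() {
        x := 42
        println(load(&x))
    }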
Updates #18892
Change-Id: I06a687860b8d22ae26b1c391499c0f5183e4c485
Reviewed-on: https://go-review.googlesource.com/110096
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Go's heap profile contains four kinds of samples
(inuse_space, inuse_objects, alloc_space, and alloc_objects).
The pprof tool chooses inuse_space (the bytes of live, in-use
objects) by default. When analyzing current memory usage, the
choice of inuse_space as the default may be useful, but in some
cases users are more interested in the total allocation
statistics over the whole program execution.
For example, when analyzing the memory profile from a benchmark
or test run, we are more likely interested in the whole
allocation history than in the live heap snapshot at the end of
the test or benchmark.
The pprof tool provides flags to control which sample type to
use for analysis. However, it is one of the less-known features
of pprof, and we believe it's better to choose the right type of
samples as the default when producing the profile.
This CL introduces a new type of profile, "allocs", which is
the same as the "heap" profile but marks alloc_space as the
default sample type, unlike heap profiles, which use inuse_space
as the default.
The 'go test -memprofile=...' command is changed to use the new
"allocs" profile type instead of the traditional "heap" profile.
Fixes #24443
Change-Id: I012dd4b6dcacd45644d7345509936b8380b6fbd9
Reviewed-on: https://go-review.googlesource.com/102696
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
Reviewed-by: Russ Cox <rsc@golang.org>
As found by unparam. Picked the low-hanging fruit, consisting only of
errors that were always nil and results that were never used. Left out
those that were useful for consistency with other func signatures.
Change-Id: I06b52bbd3541f8a5d66659c909bd93cb3e172018
Reviewed-on: https://go-review.googlesource.com/102418
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
pprof expects samples to be scaled so that they reflect unsampled
numbers. The legacy profile parser uses the sampling period in the
output and multiplies all values by the period.
0138a3cd6d/profile/legacy_profile.go (L815)
Apply the same scaling when we output the mutex profile
in the pprof proto format.
The block profile shares the same code, but how to infer unsampled
values is unclear. The legacy profile parser doesn't do anything
special, so we do nothing for the block profile here.
Tested manually by checking that the profiles reported with debug=0
(proto format) are similar to the profiles computed from the legacy
format when the profile rate is a non-trivial number (e.g. 2).
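As a rough sketch of the scaling, not the actual implementation: the
helper name is made up, and reading the period back via
SetMutexProfileFraction(-1) is an assumption about how the period
could be obtained:

    package main

    import (
        "fmt"
        "runtime"
    )

    // scaleMutexSample scales a sampled contention record by the
    // sampling period so the reported values approximate unsampled
    // totals.
    func scaleMutexSample(count int64, nanosec float64) (int64, float64) {
        // A negative rate reads the current fraction without changing it.
        period := runtime.SetMutexProfileFraction(-1)
        return count * int64(period), nanosec * float64(period)
    }

    func main() {
        c, ns := scaleMutexSample(3, 1500)
        fmt.Println(c, ns)
    }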
Change-Id: Iaa33f92051deed67d8be43ddffc7c1016db566ca
Reviewed-on: https://go-review.googlesource.com/89295
Reviewed-by: Peter Weinberger <pjw@google.com>
A couple of the CPU profiling test points make calls to helper
functions (cpuHog1, for example) where the computed value is always
thrown away by the caller without being used. A smart compiler back
end (in this case LLVM) can detect this fact and delete the contents
of the called function, which can cause tests to fail. Harden the test
slightly by passing in a value read from a global and ensuring that
the caller stores the value back to a global; this prevents any
optimizer mischief.
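A minimal sketch of the hardening pattern; the variable and function
names are illustrative, not the test's actual identifiers:

    package main

    import "fmt"

    var salt1, salt2 int // package-level globals defeat dead-code elimination

    // cpuHogWork mixes a value read from a global into its result; the
    // caller stores the result back to a global, so an optimizer
    // cannot prove the computation is unused and delete the body.
    func cpuHogWork(x int) int {
        for i := 0; i < 100000; i++ {
            x += i * i
        }
        return x
    }

    func main() {
        salt2 = cpuHogWork(salt1)
        fmt.Println(salt2)
    }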
Change-Id: Icbd6e3e32ff299c68a6397dc1404a52b21eaeaab
Reviewed-on: https://go-review.googlesource.com/76230
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
Unlike the legacy text format, which outputs the count and the number
of cycles, the pprof tool expects contention profiles to include the
count and the delay time measured in nanoseconds.
printCountCycleProfile performs the conversion from cycles to
nanoseconds.
(See parseContention function in
cmd/vendor/github.com/google/pprof/profile/legacy_profile.go)
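For reference, a hedged sketch of the cycles-to-nanoseconds
arithmetic; the frequency argument stands in for whatever the runtime
uses internally:

    package main

    import "fmt"

    // cyclesToNanoseconds converts a cycle count to nanoseconds given
    // a CPU frequency in cycles per second. How the runtime obtains
    // the frequency internally is not shown here.
    func cyclesToNanoseconds(cycles, cyclesPerSecond int64) float64 {
        return float64(cycles) * 1e9 / float64(cyclesPerSecond)
    }

    func main() {
        // 3,000,000 cycles at 3 GHz is one millisecond.
        fmt.Println(cyclesToNanoseconds(3000000, 3000000000))
    }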
Fixes #21474
Change-Id: I8e8fb6ea803822d7eaaf9ecf1df3e236ad225a7b
Reviewed-on: https://go-review.googlesource.com/64410
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Since CL 33071, testCPUProfile is the only user of the badOS map.
Replace it with the corresponding switch, with the "plan9" case
removed because it is already checked earlier in the same function.
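The replacement has roughly this shape; the GOOS values listed are
illustrative, not the exact contents of the old badOS map:

    package pproftest

    import (
        "runtime"
        "testing"
    )

    // skipOnBadOS mirrors the shape of the change: a switch on GOOS
    // instead of a package-level map. "plan9" is omitted because the
    // real test already checks it earlier.
    func skipOnBadOS(t *testing.T) {
        switch runtime.GOOS {
        case "darwin", "netbsd", "solaris":
            t.Skip("skipping on " + runtime.GOOS)
        }
    }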
Change-Id: Id647b8ee1fd37516bb702b35b3c9296a4f56b61b
Reviewed-on: https://go-review.googlesource.com/75110
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
testing.Skip{,f} will exit the test via runtime.Goexit. Thus, the
subsequent return is never reached and can be removed.
Change-Id: I1e399f3d5db753ece1ffba648850427e1b4be300
Reviewed-on: https://go-review.googlesource.com/74990
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
strings.IndexByte was introduced in go1.2 and it can be used
effectively wherever the second argument to strings.Index is
exactly one byte long.
This avoids generating unnecessary string symbols and saves
a few calls to strings.Index.
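An illustrative example of the substitution:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        s := "runtime/pprof"
        // Before: strings.Index(s, "/") needs a one-byte string argument.
        // After: strings.IndexByte(s, '/') avoids the string symbol and
        // the extra call through strings.Index.
        fmt.Println(strings.Index(s, "/"), strings.IndexByte(s, '/'))
    }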
Change-Id: I1ab5edb7c4ee9058084cfa57cbcc267c2597e793
Reviewed-on: https://go-review.googlesource.com/65930
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Do the low-hanging fruit - tiny Less functions that are used exactly
once. This reduces the amount of code and puts the logic in a single
place.
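The change is in the spirit of the following sketch; the element type
is made up for illustration:

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        samples := []struct {
            pc    uintptr
            count int
        }{{0x30, 1}, {0x10, 3}, {0x20, 2}}

        // Instead of a named type with a tiny, single-use Less method,
        // sort.Slice keeps the ordering logic inline at the call site.
        sort.Slice(samples, func(i, j int) bool {
            return samples[i].pc < samples[j].pc
        })
        fmt.Println(samples)
    }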
Change-Id: I9d4544cd68de5a95e55019bdad1fca0a1dbfae9c
Reviewed-on: https://go-review.googlesource.com/63171
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Because profile labels are copied from the goroutine into the tag
buffer by the signal handler, there's a carefully-crafted set of race
detector annotations to create the necessary happens-before edges
between setting a goroutine's profile label and retrieving it from the
profile tag buffer.
Given the constraints of the signal handler, we have to approximate
the true synchronization behavior. Currently, that approximation is
too weak.
Ideally, runtime_setProfLabel would perform a store-release on
&getg().labels and copying each label into the profile would perform a
load-acquire on &getg().labels. This would create the necessary
happens-before edges through each individual g.labels object.
Since we can't do this in the signal handler, we instead synchronize
on a "labelSync" global. The problem occurs with the following
sequence:
1. Goroutine 1 calls setProfLabel, which does a store-release on
labelSync.
2. Goroutine 2 calls setProfLabel, which does a store-release on
labelSync.
3. Goroutine 3 reads the profile, which does a load-acquire on
labelSync.
The problem is that the load-acquire only synchronizes with the *most
recent* store-release to labelSync, and the two store-releases don't
synchronize with each other. So, once goroutine 3 touches the label
set by goroutine 1, we report a race.
The solution is to use racereleasemerge. This is like a
read-modify-write, rather than just a store-release. Each RMW of
labelSync in runtime_setProfLabel synchronizes with the previous RMW
of labelSync, and this ultimately carries forward to the load-acquire,
so it synchronizes with *all* setProfLabel operations, not just the
most recent.
Change-Id: Iab58329b156122002fff12cfe64fbeacb31c9613
Reviewed-on: https://go-review.googlesource.com/56670
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
64-bit atomics on mips/mipsle are implemented using spinlocks. If
SIGPROF is received while the program is in the critical section, it
will try to write the sample using the same spinlock, creating a
deadloop.
Prevent this by counting SIGPROFs received during atomic64 operations
and postponing writing the sample(s) until called from elsewhere, with
the pc set to _LostSIGPROFDuringAtomic64.
Added a test case, per Cherry's suggestion. Works around #20146.
Change-Id: Icff504180bae4ee83d78b19c0d9d6a80097087f9
Reviewed-on: https://go-review.googlesource.com/42652
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
The name LabelList was changed to LabelSet during the development of the
proposal [1], except in one function comment. This commit fixes that.
Fixes #20905.
[1] https://github.com/golang/go/issues/17280
Change-Id: Id4f48d59d7d513fa24b2e42795c2baa5ceb78f36
Reviewed-on: https://go-review.googlesource.com/47470
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Currently, semrelease1 readies the next waiter before recording a
mutex event. However, if the next waiter is expecting to look at the
mutex profile, as is the case in TestMutexProfile, this may delay
recording the event too much.
Swap the order of these operations so semrelease1 records the mutex
event before readying the next waiter. This also means readying the
next waiter is the very last thing semrelease1 does, which seems
appropriate.
Fixes #19139.
Change-Id: I1a62063599fdb5d49bd86061a180c0a2d659474b
Reviewed-on: https://go-review.googlesource.com/45751
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Peter Weinberger <pjw@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
profileBuilder.locForPC returns 0 to mean "no location" because 0 is
an invalid location index. However, the code to build count profiles
doesn't check the result of locForPC, so this 0 location index ends up
in the profile's location list. This, in turn, causes problems later
when we decode the profile because it puts a nil *Location in the
sample's location slice, which can later lead to a nil pointer panic.
Fix this by making printCountProfile correctly discard the result of
locForPC if it returns 0. This makes this call match the other two
calls of locForPC.
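A self-contained stand-in for the check; locForPC below is a toy that
only mimics the real method's zero-means-no-location convention:

    package main

    import "fmt"

    // locForPC stands in for profileBuilder.locForPC: it returns 0 to
    // mean "no location for this PC".
    func locForPC(pc uintptr) uint64 {
        if pc == 0 {
            return 0
        }
        return uint64(pc) // toy mapping for illustration only
    }

    func main() {
        var locs []uint64
        for _, pc := range []uintptr{0x40, 0, 0x80} {
            l := locForPC(pc)
            if l == 0 {
                continue // discard, as printCountProfile now does
            }
            locs = append(locs, l)
        }
        fmt.Println(locs) // [64 128]
    }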
Updates #15156.
Change-Id: I4492b3652b513448bc56f4cfece4e37da5e42f94
Reviewed-on: https://go-review.googlesource.com/43630
Reviewed-by: Michael Matloob <matloob@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
TestGoroutineCounts was flaky when running on a system under load.
This happened on three builds over the last couple of days.
Fix this by running the test on a single operating system thread, so
we do not depend on the operating system scheduler. 50,000 runs of
the new version passed without failure, while the old version failed
0.5% of the time.
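One common way to confine a test to a single P is shown below;
whether the change uses exactly this mechanism is an assumption:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Run on a single P so goroutine interleaving does not depend
        // on the operating system scheduler; restore the old value
        // when done.
        defer runtime.GOMAXPROCS(runtime.GOMAXPROCS(1))
        fmt.Println("running with GOMAXPROCS=1")
    }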
Fixes #15156.
Change-Id: I1e5a18d0fef4f72cc9a56e376822b2849cdb0f8b
Reviewed-on: https://go-review.googlesource.com/43590
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently proto symbolization uses runtime.FuncForPC and assumes each
PC maps to a single frame. This isn't true in the presence of inlining
(even with leaf-only inlining this can get incorrect results).
Change PC symbolization to use runtime.CallersFrames to expand each PC
to all of the frames at that PC.
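An example of expanding PCs into all of their frames with
runtime.CallersFrames; the PCs here come from runtime.Callers for
illustration:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        pcs := make([]uintptr, 16)
        n := runtime.Callers(0, pcs)

        // Unlike FuncForPC, CallersFrames expands each PC into every
        // frame at that PC, so inlined calls are reported as separate
        // frames.
        frames := runtime.CallersFrames(pcs[:n])
        for {
            frame, more := frames.Next()
            fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
            if !more {
                break
            }
        }
    }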
Change-Id: I8d20dff7495a5de495ae07f569122c225d433ced
Reviewed-on: https://go-review.googlesource.com/41256
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
Proto profile conversion is inconsistent about call vs return PCs in
profile locations. The proto defines locations to be call PCs. This is
what we do when proto-izing CPU profiles, but we fail to convert the
return PCs in memory and count profile stacks to call PCs when
converting them to proto locations.
Fix this in the heap and count profile conversion functions.
TestConvertMemProfile also hard-codes this failure to convert from
return PCs to call PCs, so fix up the addresses in the synthesized
profile to be return PCs while checking that we get call PCs out of
the conversion.
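For reference, the usual idiom for mapping a return PC back to its
call is to subtract 1 before symbolization; whether the conversion
functions use exactly this form is an assumption:

    package main

    import "fmt"

    // callPC maps a return PC (the address just after a call
    // instruction) back into the call itself. Subtracting 1 lands
    // inside the call instruction, which is what symbolization expects.
    func callPC(returnPC uintptr) uintptr {
        if returnPC == 0 {
            return 0
        }
        return returnPC - 1
    }

    func main() {
        fmt.Printf("%#x\n", callPC(0x45d1a4))
    }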
Change-Id: If1fc028b86fceac6d71a2d9fa6c41ff442c89296
Reviewed-on: https://go-review.googlesource.com/42951
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
TestGoroutineCounts currently depends on timing to get 100 goroutines
to a known blocking point before taking a profile. This fails
frequently, with different goroutines captured at different stacks.
The test is disabled on openbsd because it was too flaky, but in fact
it flakes on all platforms.
Fix this by using Gosched instead of timing. This is both much more
reliable and makes the test run faster.
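A sketch of the idea; the goroutine count and bodies are placeholders,
not the test's:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        block := make(chan struct{})
        for i := 0; i < 100; i++ {
            go func() { <-block }()
        }
        // Yield to the scheduler instead of sleeping so the goroutines
        // can reach their blocking point before the profile would be
        // taken.
        for i := 0; i < 200; i++ {
            runtime.Gosched()
        }
        fmt.Println("goroutines launched:", runtime.NumGoroutine()-1)
        close(block)
    }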
Fixes #15156.
Change-Id: Ia6e894196d717655b8fb4ee96df53f6cc8bc5f1f
Reviewed-on: https://go-review.googlesource.com/42953
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Currently the pprof tests re-symbolize PCs in profiles, and do so in a
way that can't handle inlining. Proto profiles already contain full
symbol information, so this modifies the tests to use the symbol
information already present in the profile.
Change-Id: I63cd491de7197080fd158b1e4f782630f1bbbb56
Reviewed-on: https://go-review.googlesource.com/41255
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
Profile labels added by the user via pprof.Do, if present, will
be in a *labelMap stored in the unsafe.Pointer 'tag' field of
the profile map entry. This change extracts the labels from the tag
field and writes them to the profile proto.
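For context, this is how such labels are attached on the user side
with the standard runtime/pprof API (the label values are arbitrary):

    package main

    import (
        "context"
        "runtime/pprof"
    )

    func main() {
        ctx := context.Background()
        // Labels set via pprof.Do travel in the 'tag' field of the
        // profile map entries and, after this change, appear in the
        // proto output.
        pprof.Do(ctx, pprof.Labels("worker", "purge"), func(ctx context.Context) {
            // work to be profiled under these labels
        })
    }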
Change-Id: Ic40fdc58b66e993ca91d5d5effe0e04ffbb5bc46
Reviewed-on: https://go-review.googlesource.com/39613
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Delete old TestRuntimeFunctionTrimming, which is testing a dead API
and is now handled in end-to-end tests.
Change-Id: I64fc2991ed4a7690456356b5f6b546f36935bb67
Reviewed-on: https://go-review.googlesource.com/41815
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
TestBlockProfile matches samples against a regexp that accepts "," in
profile PCs. I suspect this was just a syntax mistake. Remove "," from
the character class.
Change-Id: Idcfc20ed6900075abae08597ba71db559e89b37b
Reviewed-on: https://go-review.googlesource.com/41111
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Peter Weinberger <pjw@google.com>
TestBlockProfile currently requires exactly five PCs in each sample.
With more aggressive inlining there may be fewer, so change this test
to use the same pattern as TestMutexProfile, which accepts one or more
PCs. With this change, this test passes when compiled with -l=4.
Change-Id: I1421a6d56c96b77111bdc671d88723a222672fd6
Reviewed-on: https://go-review.googlesource.com/41110
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Lazar <lazard@golang.org>
The period recorded in CPU profiles is in nanoseconds, but was being
computed incorrectly as hz * 1000. As a result, many absolute times
displayed by pprof were incorrect.
Fix this by computing the period correctly.
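A sketch of the corrected arithmetic; 100 Hz is chosen only as an
example rate:

    package main

    import "fmt"

    func main() {
        hz := 100 // CPU profiling rate in samples per second
        // Incorrect: hz * 1000 (what the code used to record).
        // Correct: the period in nanoseconds is 1e9 / hz.
        period := int64(1e9) / int64(hz)
        fmt.Println(period, "ns between samples") // 10000000 ns = 10ms at 100 Hz
    }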
Change-Id: I6fadd6d8ad3e57f31e8cc7a25a24fcaec510d8d4
Reviewed-on: https://go-review.googlesource.com/40995
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
Reviewed-by: Russ Cox <rsc@golang.org>
This helps systems that maintain an external database mapping
build ID to symbol information for the given binary, especially
in the case where /proc/self/maps lists many different files
(for example, many shared libraries).
Avoid importing debug/elf to avoid dragging in that whole
package (and its dependencies like debug/dwarf) into the
build of every program that generates a profile.
Fixes #19431.
Change-Id: I6d4362a79fe23e4f1726dffb0661d20bb57f766f
Reviewed-on: https://go-review.googlesource.com/37855
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
It's only ever called with the value it was using, but the code was
counterintuitive. Use the parameter instead, like the other funcs near
it.
Found by github.com/mvdan/unparam.
Change-Id: I45855e11d749380b9b2a28e6dd1d5dedf119a19b
Reviewed-on: https://go-review.googlesource.com/37893
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
If the caller passes a large skip count to Profile.Add,
the list of pcs is empty, which results in junk
(a nil pc) being recorded. Check for that explicitly,
and replace such stack traces with a lostProfileEvent.
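For illustration, a call that could hit the empty-stack case; the
profile name is arbitrary and the skip count is deliberately larger
than the call depth:

    package main

    import (
        "fmt"
        "runtime/pprof"
    )

    func main() {
        p := pprof.NewProfile("example.lostprofile")
        // A skip count far beyond the call depth leaves no PCs in the
        // recorded stack; such an entry is now attributed to a
        // lostProfileEvent instead of recording a junk nil PC.
        p.Add("key", 100)
        fmt.Println(p.Count()) // 1
    }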
Fixes #18836.
Change-Id: I99c96aa67dd5525cd239ea96452e6e8fcb25ce02
Reviewed-on: https://go-review.googlesource.com/36891
Reviewed-by: Russ Cox <rsc@golang.org>
The existing code builds a full profile in memory.
Then it translates that profile into a data structure (in memory).
Then it marshals that data structure into a protocol buffer (in memory).
Then it gzips that marshaled form into the underlying writer.
So there are three copies of the full profile data in memory
at the same time before we're done. This is obviously dumb.
This CL implements a fully streaming conversion from
the original in-memory profile to the underlying writer.
There is now only one copy of the profile in memory.
For the non-CPU profiles, this is optimal, since we have to
have a full copy in memory to start with.
For the CPU profiles, we could still try to bound the profile
size stored in memory and stream fragments out during
the actual profiling, as Go 1.7 did (with a simpler format),
but so far that hasn't been necessary.
Change-Id: Ic36141021857791bf0cd1fce84178fb5e744b989
Reviewed-on: https://go-review.googlesource.com/37164
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
The old hash table was a placeholder that allocated memory
during every lookup for key generation, even for keys that hit
in the table.
Change-Id: I4f601bbfd349f0be76d6259a8989c9c17ccfac21
Reviewed-on: https://go-review.googlesource.com/37163
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
This doesn't change the functionality of the current code,
but it sets us up for exporting the profiling labels into the profile.
The old code had a hash table of profile samples maintained
during the signal handler, with evictions going into a log.
The new code just logs every sample directly, leaving the
hash-based deduplication to an ordinary goroutine.
The new code also avoids storing the entire profile in two
forms in memory, an unfortunate regression introduced
when binary profile support was added. After this CL the
entire profile is only stored once in memory. We'd still like
to get back down to storing it zero times (streaming it to
the underlying io.Writer).
Change-Id: I0893a1788267c564aa1af17970d47377b2a43457
Reviewed-on: https://go-review.googlesource.com/36712
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
Now that we don't rescan stacks, stack barriers are unnecessary. This
removes all of the code and structures supporting them as well as
tests that were specifically for stack barriers.
Updates #17503.
Change-Id: Ia29221730e0f2bbe7beab4fa757f31a032d9690c
Reviewed-on: https://go-review.googlesource.com/36620
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
These are very tightly coupled, and internal/protopprof is small.
There's no point to having a separate package.
Change-Id: I2c8aa49c9e18a7128657bf2b05323860151b5606
Reviewed-on: https://go-review.googlesource.com/36711
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>