If g1 sets its labels and then they are copied into a profile buffer
and then g2 reads the profile buffer and inspects the labels,
the race detector must understand that g1's recording of the labels
happens before g2's use of the labels. Make that so.
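Roughly, the fix follows the usual race-annotation pattern: publishing the
labels performs a release on a dummy sync variable, and reading them from the
profile buffer performs a matching acquire. A compilable sketch with stand-in
names (not the runtime's actual code):

package main

import "unsafe"

// labelSync exists only to give the race detector something to hang a
// happens-before edge on between label writers and profile readers.
var labelSync uintptr

// Stand-ins for the runtime's internal race annotations.
func releaseMerge(addr unsafe.Pointer) {}
func acquire(addr unsafe.Pointer)      {}

// setProfLabel runs on g1 when it installs its label set.
func setProfLabel(labels unsafe.Pointer) {
	releaseMerge(unsafe.Pointer(&labelSync)) // everything before this...
	// ...store labels for the current goroutine...
}

// readProfLabels runs on g2 when it inspects labels copied into the profile buffer.
func readProfLabels() {
	acquire(unsafe.Pointer(&labelSync)) // ...happens before everything after this.
	// ...read the labels out of the profile buffer...
}

func main() {}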
Fixes race test failure in CL 39613.
Change-Id: Id7cda1c2aac6f8eef49213b5ca414f7154b4acfa
Reviewed-on: https://go-review.googlesource.com/42111
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
More specifically, allow Unicode letters in the directories of GitHub
repositories; such letters do occur in practice, and there is no good
reason for go get to disallow them.
Do so by using a predefined character class, the Unicode character
property class \p{L}, which matches the Unicode characters that are
letters:
http://www.regular-expressions.info/unicode.html#category
Since it's not possible to create GitHub usernames or repositories
containing Unicode letters at this time, those parts of the import path
are still restricted to ASCII letters only.
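For illustration only (this is a stand-in, not the pattern that landed in
cmd/go), the shape of such a regexp is roughly:

package main

import (
	"fmt"
	"regexp"
)

// User and repo stay ASCII-only; later path elements may contain Unicode letters.
var githubPath = regexp.MustCompile(`^github\.com/[A-Za-z0-9_.\-]+/[A-Za-z0-9_.\-]+(/[\p{L}A-Za-z0-9_.\-]+)*$`)

func main() {
	fmt.Println(githubPath.MatchString("github.com/user/repo/héllo")) // true
	fmt.Println(githubPath.MatchString("github.com/üser/repo"))       // false: username must be ASCII
}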
Fix name of tested func in t.Errorf messages.
Fixes #18660.
Change-Id: Ia0ef4742bfd8317d989ef1eb1d7065e382852fe2
Reviewed-on: https://go-review.googlesource.com/41822
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This is the same technique used in CL 24466. By adding a little bit of
size to the binary, we can remove a function call and gain a lot of
performance.
A raw array ([128]bool) would be faster, but it would also be 128 bytes
instead of 16.
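Roughly, the technique looks like this (a sketch, not necessarily the exact
code that landed):

package main

import (
	"fmt"
	"unicode/utf8"
)

// specialBytes is a 128-bit bitmap, packed into 16 bytes, of the ASCII
// characters that QuoteMeta must escape.
var specialBytes [16]byte

func init() {
	for _, b := range []byte(`\.+*?()|[]{}^$`) {
		specialBytes[b%16] |= 1 << (b / 16)
	}
}

// special reports whether b needs escaping, using an inline table lookup
// instead of a call into another package.
func special(b byte) bool {
	return b < utf8.RuneSelf && specialBytes[b%16]&(1<<(b/16)) != 0
}

func main() {
	fmt.Println(special('*'), special('a')) // true false
}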
Running tip on a Mac:
name old time/op new time/op delta
QuoteMetaAll-4 192ns ±12% 120ns ±11% -37.27% (p=0.000 n=10+10)
QuoteMetaNone-4 186ns ± 6% 64ns ± 6% -65.52% (p=0.000 n=10+10)
name old speed new speed delta
QuoteMetaAll-4 73.2MB/s ±11% 116.6MB/s ±10% +59.21% (p=0.000 n=10+10)
QuoteMetaNone-4 139MB/s ± 6% 405MB/s ± 6% +190.74% (p=0.000 n=10+10)
Change-Id: I68ce9fe2ef1c28e2274157789b35b0dd6ae3efb5
Reviewed-on: https://go-review.googlesource.com/41495
Run-TryBot: Kevin Burke <kev@inburke.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Use it to ensure that dowidth is not called
from the backend on a type whose size
has not yet been calculated.
This is an alternative to CL 42016.
Change-Id: I8c7b4410ee4c2a68573102f6b9b635f4fdcf392e
Reviewed-on: https://go-review.googlesource.com/42018
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Now it's not very different from syscall_dragonfly.go in the
golang.org/x/sys/unix repository.
Change-Id: I8dfd22e1ebce9dc2cc71ab9ab7f0c92d93b2b762
Reviewed-on: https://go-review.googlesource.com/41835
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This CL mainly moves some work to the switch on w.Op,
to make a follow-up change simpler and clearer.
Updates #19838
Change-Id: I86f3181c380dd60960afcc24224f655276b8956c
Reviewed-on: https://go-review.googlesource.com/42010
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Node.Used was written to from the backend
concurrently with reads of Node.Class
for the same ONAME Nodes.
I do not know why it was not failing consistently
under the race detector, but it is a race.
This is likely also a problem with Node.HasVal and Node.HasOpt.
They will be handled in a separate CL.
Fix Used by moving it to gc.Name and making it a separate bool.
There was one non-Name use of Used, marking OLABELs as used.
That is no longer needed, now that goto and label checking
happens early in the front end.
Leave the getters and setters in place,
to ease changing the representation in the future
(or changing to an interface!).
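In outline, with illustrative definitions rather than the compiler's real ones:

package main

// Minimal stand-ins; the real Node and Name live in cmd/compile/internal/gc.
type Name struct {
	Used bool // now a plain bool on Name, written only by the front end
}

type Node struct {
	Name *Name
	// Class and the other flag bits stay on Node and may be read by the back end.
}

// Getters and setters are kept so call sites stay unchanged and the
// representation can change again later.
func (n *Node) Used() bool     { return n.Name != nil && n.Name.Used }
func (n *Node) SetUsed(b bool) { n.Name.Used = b }

func main() {}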
Updates #20144
Change-Id: I9bec7c6d33dcb129a4cfa9d338462ea33087f9f7
Reviewed-on: https://go-review.googlesource.com/42015
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Type.Size and Type.Alignment are for the front end:
They calculate size and alignment if needed.
Type.MustSize and Type.MustAlignment are for the back end:
They call Fatal if size and alignment are not already calculated.
Most uses are of MustSize and MustAlignment,
but that's because the back end is newer,
and this API was added to support it.
This CL was mostly generated with sed and selective reversion.
The only mildly interesting bit is the change of the ssa.Type interface
and the supporting ssa dummy types.
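A rough, self-contained sketch of the split (the field, sentinel, and helper
names here are stand-ins for the compiler's internals):

package main

import "log"

const badWidth = -1000000000 // stand-in for the compiler's BADWIDTH sentinel

type Type struct {
	Width int64 // size in bytes; badWidth until calculated
	Align uint8
}

// dowidth stands in for the front end's size calculation.
func dowidth(t *Type) { /* ... */ }

// Size is for the front end: calculate the width if needed, then return it.
func (t *Type) Size() int64 {
	dowidth(t)
	return t.Width
}

// MustSize is for the back end: the width must already be known.
func (t *Type) MustSize() int64 {
	if t.Width == badWidth {
		log.Fatalf("MustSize: width not calculated for %v", t)
	}
	return t.Width
}

func main() {}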
Follow-up to review feedback on CL 41970.
Passes toolstash-check.
Change-Id: I0d9b9505e57453dae8fb6a236a07a7a02abd459e
Reviewed-on: https://go-review.googlesource.com/42016
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
dowidth is fundamentally unsafe to call from the back end;
it will cause data races.
Replace all calls to dowidth in the backend with
assertions that the width has been calculated.
Then fix all the cases in which that was not so,
including the cases from #20145.
Fixes #20145.
Change-Id: Idba3d19d75638851a30ec2ebcdb703c19da3e92b
Reviewed-on: https://go-review.googlesource.com/41970
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Many (most!) of the values of objapi.SymKind are used only in the linker, so
this creates a separate cmd/link/internal/ld.SymKind type, removes most values
from SymKind, and maps one to the other when reading object files in the linker.
Two of the remaining objapi.SymKind values are only checked for, never set,
and so will never actually be found, but I wanted to keep this to the most
mechanical change possible.
Change-Id: I4bbc5aed6713cab3e8de732e6e288eb77be0474c
Reviewed-on: https://go-review.googlesource.com/40985
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Currently, the following two programs generate identical DWARF info
for type Foo.
prog 1)
type Foo struct {
	Bar
}
prog 2)
type Foo struct {
	Bar Bar
}
This change adds a go-specific attribute DW_AT_go_embedded_field
to annotate each member entry. Its absence or false value indicates
the corresponding member is not an embedded field.
Updates #20037
Change-Id: Ibcbd2714f3e4d97c7b523d7398f29ab2301cc897
Reviewed-on: https://go-review.googlesource.com/41873
Reviewed-by: David Chase <drchase@google.com>
There's been one failure on the race builder so far,
before we started sorting functions by length.
The race detector can only detect actual races,
and ordering functions by length might reduce the odds
of catching some kinds of races. Give it more to chew on.
Updates #20144
Change-Id: I0206ac182cb98b70a729dea9703ecb0fef54d2d0
Reviewed-on: https://go-review.googlesource.com/41973
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Its sole use is in walk.go. 100% code movement.
gsubr.go increasingly contains backend-y things.
With a few more relocations, it could probably be
fruitfully renamed progs.go.
Change-Id: I61ec5c2bc1f8cfdda64c6d6f580952c154ff60e0
Reviewed-on: https://go-review.googlesource.com/41972
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
They were used only in esc.go. 100% code movement.
Also, remove the rather outdated comment at the top of gen.go.
It's not really clear what gen.go is for any more.
Change-Id: Iaedfe7015ef6f5c11c49f3e6721b15d779a00faa
Reviewed-on: https://go-review.googlesource.com/41971
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
When a constant doesn't fit in a single instruction, use two
paired instructions instead of the constant pool. For example
ADD $0xaa00bb, R0, R1
Used to rewrite to:
MOV ?(IP), R11
ADD R11, R0, R1
Instead, do:
ADD $0xaa0000, R0, R1
ADD $0xbb, R1, R1
Same number of instructions.
Good:
4 fewer bytes (no constant pool entry).
One fewer load.
Bad:
Critical path is one instruction longer.
It's probably worth it to avoid the loads; they are expensive.
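For reference, here is a self-contained sketch (not the compiler's actual
code) of the kind of check involved: whether a 32-bit constant can be handled
as two ARM rotated immediates.

package main

import (
	"fmt"
	"math/bits"
)

// isRotImm reports whether c fits ARM's immediate form: an 8-bit value
// rotated right by an even number of bits.
func isRotImm(c uint32) bool {
	for i := 0; i < 16; i++ {
		if c&^0xff == 0 {
			return true
		}
		c = bits.RotateLeft32(c, 2)
	}
	return false
}

// splitImm tries to split c into two disjoint immediates, so that e.g.
// ADD $0xaa00bb can become ADD $0xaa0000 followed by ADD $0xbb.
func splitImm(c uint32) (hi, lo uint32, ok bool) {
	for shift := uint(0); shift < 32; shift += 2 {
		lo = c & (0xff << shift)
		hi = c &^ (0xff << shift)
		if lo != 0 && hi != 0 && isRotImm(lo) && isRotImm(hi) {
			return hi, lo, true
		}
	}
	return 0, 0, false
}

func main() {
	hi, lo, ok := splitImm(0xaa00bb)
	fmt.Printf("%#x %#x %v\n", hi, lo, ok) // 0xaa0000 0xbb true
}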
Dave Cheney got us some performance numbers: https://perf.golang.org/search?q=upload:20170426.1
TL;DR mean 1.37% improvement.
Change-Id: Ib206836161fdc94a3962db6f9caa635c87d57cf1
Reviewed-on: https://go-review.googlesource.com/41612
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
The "lldb start" phase often times out on the iOS builder. Increase
the timeout and see if that helps.
Change-Id: I92fd67cbfa90659600e713198d6b2c5c78dde20f
Reviewed-on: https://go-review.googlesource.com/41863
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Elias Naur <elias.naur@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The current code treats the condition as a special register and writes
its raw data directly into the instruction.
The fix converts the raw data into the correct condition encoding.
Also fix the operand category of FCCMP.
Add tests to cover all cases.
Change-Id: Ib194041bd9017dd0edbc241564fe983082ac616b
Reviewed-on: https://go-review.googlesource.com/41511
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Previously, the package did not distinguish between baseline and
extended sequential images. Both are non-progressive images, but the Th
range differs between the two, as per Annex B of
https://www.w3.org/Graphics/JPEG/itu-t81.pdf
Extended sequential images are often emitted by the Guetzli encoder.
Fixes #19913
Change-Id: I3d0f9e16d5d374ee1c65e3a8fb87519de61cff94
Reviewed-on: https://go-review.googlesource.com/41831
Reviewed-by: David Symonds <dsymonds@golang.org>
When using a concurrent backend,
the overall compilation time is bounded
in part by the slowest function to compile.
The number of top-level statements in a function
is an easily calculated and fairly reliable
proxy for compilation time.
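As a sketch (with illustrative types, not the compiler's code), the change
amounts to sorting the compile queue by statement count, largest first:

package main

import (
	"fmt"
	"sort"
)

// fn is a stand-in for the compiler's representation of a function to compile.
type fn struct {
	name  string
	stmts int // number of top-level statements, a proxy for compile time
}

func main() {
	queue := []fn{{"small", 3}, {"huge", 900}, {"medium", 40}}

	// Compile the biggest functions first, so a single slow function is
	// less likely to be the last thing the workers are waiting on.
	sort.Slice(queue, func(i, j int) bool { return queue[i].stmts > queue[j].stmts })

	fmt.Println(queue) // [{huge 900} {medium 40} {small 3}]
}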
Here's a standard compilecmp output for -c=8 with this CL:
name old time/op new time/op delta
Template 127ms ± 4% 125ms ± 6% -1.33% (p=0.000 n=47+50)
Unicode 84.8ms ± 4% 84.5ms ± 4% ~ (p=0.217 n=49+49)
GoTypes 289ms ± 3% 287ms ± 3% -0.78% (p=0.002 n=48+50)
Compiler 1.36s ± 3% 1.34s ± 2% -1.29% (p=0.000 n=49+47)
SSA 2.95s ± 3% 2.77s ± 4% -6.23% (p=0.000 n=50+49)
Flate 70.7ms ± 3% 70.9ms ± 2% ~ (p=0.112 n=50+49)
GoParser 85.0ms ± 3% 83.0ms ± 4% -2.31% (p=0.000 n=48+49)
Reflect 229ms ± 3% 225ms ± 4% -1.83% (p=0.000 n=49+49)
Tar 70.2ms ± 3% 69.4ms ± 3% -1.17% (p=0.000 n=49+49)
XML 115ms ± 7% 114ms ± 6% ~ (p=0.158 n=49+47)
name old user-time/op new user-time/op delta
Template 352ms ± 5% 342ms ± 8% -2.74% (p=0.000 n=49+50)
Unicode 117ms ± 5% 118ms ± 4% +0.88% (p=0.005 n=46+48)
GoTypes 986ms ± 3% 980ms ± 4% ~ (p=0.110 n=46+48)
Compiler 4.39s ± 2% 4.43s ± 4% +0.97% (p=0.002 n=50+50)
SSA 12.0s ± 2% 13.3s ± 3% +11.33% (p=0.000 n=49+49)
Flate 222ms ± 5% 219ms ± 6% -1.56% (p=0.002 n=50+50)
GoParser 271ms ± 5% 268ms ± 4% -0.83% (p=0.036 n=49+48)
Reflect 560ms ± 4% 571ms ± 3% +1.90% (p=0.000 n=50+49)
Tar 183ms ± 3% 183ms ± 3% ~ (p=0.903 n=45+50)
XML 364ms ±13% 391ms ± 4% +7.16% (p=0.000 n=50+40)
A more interesting way of viewing the data is by
looking at the ratio of the time taken to compile
the slowest-to-compile function to the overall
time spent compiling functions.
If this ratio is small (near 0), then increased concurrency might help.
If this ratio is big (near 1), then we're bounded by that single function.
I instrumented the compiler to emit this ratio per-package,
ran 'go build -a -gcflags=-c=C -p=P std cmd' three times,
for varying values of C and P,
and collected the ratios encountered into an ASCII histogram.
Here's c=1 p=1, which is a non-concurrent backend, single process at a time:
90%|
80%|
70%|
60%|
50%|
40%|
30%|
20%|**
10%|***
0%|*********
----+----------
|0123456789
The x-axis is floor(10*ratio), so the first column indicates the percent of
ratios that fell in the 0% to 9.9999% range.
We can see in this histogram that more concurrency will help;
in most cases, the ratio is small.
Here's c=8 p=1, before this CL:
90%|
80%|
70%|
60%|
50%|
40%|
30%| *
20%| *
10%|* * *
0%|**********
----+----------
|0123456789
In 30-40% of cases, we're mostly bound by the compilation time
of a single function.
Here's c=8 p=1, after this CL:
90%|
80%|
70%|
60%|
50%| *
40%| *
30%| *
20%| *
10%| *
0%|**********
----+----------
|0123456789
The sorting pays off; we are bound by the
compilation time of a single function in over half of packages.
The single * in the histogram indicates 0-10%.
The actual values for this chart are:
0: 5%, 1: 1%, 2: 1%, 3: 4%, 4: 5%, 5: 7%, 6: 7%, 7: 7%, 8: 9%, 9: 55%
This indicates that efforts to increase or enable more concurrency,
e.g. by optimizing mutexes or increasing the value of c,
will probably not yield fruit.
That matches what compilecmp tells us.
Further optimization efforts should thus focus instead on one of:
(1) making more functions compile concurrently
(2) improving the compilation time of the slowest functions
(3) speeding up the remaining serial parts of the compiler
(4) automatically splitting up some large autogenerated functions
into small ones, as discussed in #19751
I hope to spend more time on (1) before the freeze.
Adding process parallelism doesn't change the story much.
For example, here's c=8 p=8, after this CL:
90%|
80%|
70%|
60%|
50%|
40%| *
30%| *
20%| *
10%| ***
0%|**********
----+----------
|0123456789
Since we don't need to worry much about p,
these histograms can help us select a good
general value of c to use as a default,
assuming we're not bounded by GOMAXPROCS.
Here are some charts after this CL, for c from 1 to 8:
c=1 p=1
90%|
80%|
70%|
60%|
50%|
40%|
30%|
20%|**
10%|***
0%|*********
----+----------
|0123456789
c=2 p=1
90%|
80%|
70%|
60%|
50%|
40%|
30%|
20%|
10%| **** *
0%|**********
----+----------
|0123456789
c=3 p=1
90%|
80%|
70%|
60%|
50%|
40%|
30%|
20%| *
10%| ** * *
0%|**********
----+----------
|0123456789
c=4 p=1
90%|
80%|
70%|
60%|
50%|
40%|
30%| *
20%| *
10%| * *
0%|**********
----+----------
|0123456789
c=5 p=1
90%|
80%|
70%|
60%|
50%|
40%|
30%| *
20%| *
10%| * *
0%|**********
----+----------
|0123456789
c=6 p=1
90%|
80%|
70%|
60%|
50%|
40%| *
30%| *
20%| *
10%| *
0%|**********
----+----------
|0123456789
c=7 p=1
90%|
80%|
70%|
60%|
50%| *
40%| *
30%| *
20%| *
10%| **
0%|**********
----+----------
|0123456789
c=8 p=1
90%|
80%|
70%|
60%|
50%| *
40%| *
30%| *
20%| *
10%| *
0%|**********
----+----------
|0123456789
Given the increased user-CPU costs as
c increases, it looks like c=4 is probably
the sweet spot, at least for now.
Pleasingly, this matches (and explains)
the results of the standard benchmarking
that I have done.
Updates #15756
Change-Id: I82b606c06efd34a5dbd1afdbcf66a605905b2aeb
Reviewed-on: https://go-review.googlesource.com/41192
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This CL adds initial support for concurrent backend compilation.
BACKGROUND
The compiler currently consists (very roughly) of the following phases:
1. Initialization.
2. Lexing and parsing into the cmd/compile/internal/syntax AST.
3. Translation into the cmd/compile/internal/gc AST.
4. Some gc AST passes: typechecking, escape analysis, inlining,
closure handling, expression evaluation ordering (order.go),
and some lowering and optimization (walk.go).
5. Translation into the cmd/compile/internal/ssa SSA form.
6. Optimization and lowering of SSA form.
7. Translation from SSA form to assembler instructions.
8. Translation from assembler instructions to machine code.
9. Writing lots of output: machine code, DWARF symbols,
type and reflection info, export data.
Phase 2 was already concurrent as of Go 1.8.
Phase 3 is planned for eventual removal;
we hope to go straight from syntax AST to SSA.
Phases 5–8 are per-function; this CL adds support for
processing multiple functions concurrently.
The slowest phases in the compiler are 5 and 6,
so this offers the opportunity for some good speed-ups.
Unfortunately, it's not quite that straightforward.
In the current compiler, the latter parts of phase 4
(order, walk) are done function-at-a-time as needed.
Making order and walk concurrency-safe proved hard,
and they're not particularly slow, so there wasn't much reward.
To enable phases 5–8 to be done concurrently,
when concurrent backend compilation is requested,
we complete phase 4 for all functions
before starting later phases for any functions.
Also, in reality, we automatically generate new
functions in phase 9, such as method wrappers
and equality and hash routines.
Those new functions then go through phases 4–8.
This CL disables concurrent backend compilation
after the first, big, user-provided batch of
functions has been compiled.
This is done to keep things simple,
and because the autogenerated functions
tend to be small, few, simple, and fast to compile.
USAGE
Concurrent backend compilation still defaults to off.
To set the number of functions that may be backend-compiled
concurrently, use the compiler flag -c.
In future work, cmd/go will automatically set -c.
Furthermore, this CL has been intentionally written
so that the c=1 path has no backend concurrency whatsoever,
not even spawning any goroutines.
This helps ensure that, should problems arise
late in the development cycle,
we can simply have cmd/go set c=1 always,
and revert to the original compiler behavior.
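In outline, the dispatch looks something like the sketch below (simplified,
not the compiler's actual code): c=1 compiles serially without spawning any
goroutines, and c>1 fans the queued functions out to c workers.

package main

import "sync"

type fn struct{ name string }

// compileSSA stands in for phases 5-8 for a single function.
func compileSSA(f *fn) { /* ... */ }

// compileFunctions drains the queue with at most c concurrent workers.
func compileFunctions(queue []*fn, c int) {
	if c == 1 {
		// No concurrency and no goroutines: same shape as the old compiler.
		for _, f := range queue {
			compileSSA(f)
		}
		return
	}
	work := make(chan *fn, len(queue))
	for _, f := range queue {
		work <- f
	}
	close(work)
	var wg sync.WaitGroup
	for i := 0; i < c; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for f := range work {
				compileSSA(f)
			}
		}()
	}
	wg.Wait()
}

func main() {
	compileFunctions([]*fn{{"a"}, {"b"}, {"c"}}, 4)
}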
MUTEXES
Most of the work required to make concurrent backend
compilation safe has occurred over the past month.
This CL adds a handful of mutexes to get the rest of the way there;
they are the mutexes that I didn't see a clean way to avoid.
Some of them may still be eliminable in future work.
In no particular order:
* gc.funcsymsmu. The global funcsyms slice is populated
lazily when we need function symbols for closures.
This occurs during gc AST to SSA translation.
The function funcsym also does a package lookup,
which is a source of races on types.Pkg.Syms;
funcsymsmu also covers that package lookup.
This mutex is low priority: it adds a single global,
it is in an infrequently used code path, and it is low contention.
Since funcsyms may now be added in any order,
we must sort them to preserve reproducible builds.
* gc.largeStackFramesMu. We don't discover until after SSA compilation
that a function's stack frame is gigantic.
Recording that error happens basically never,
but it does happen concurrently.
Fix with a low priority mutex and sorting (sketched after this list).
* obj.Link.hashmu. ctxt.hash stores the mapping from
types.Syms (compiler symbols) to obj.LSyms (linker symbols).
It is accessed fairly heavily through all the phases.
This is the only heavily contended mutex.
* gc.signatlistmu. The global signatlist map is
populated with types through several of the concurrent phases,
including notably via ngotype during DWARF generation.
It is low priority for removal.
* gc.typepkgmu. Looking up symbols in the types package
happens a fair amount during backend compilation
and DWARF generation, particularly via ngotype.
This mutex helps us to avoid a broader mutex on types.Pkg.Syms.
It has low-to-moderate contention.
* types.internedStringsmu. gc AST to SSA conversion and
some SSA work introduce new autotmps.
Those autotmps have their names interned to reduce allocations.
That interning requires protecting types.internedStrings.
The autotmp names are heavily re-used, and the mutex
overhead and contention here are low, so it is probably
a worthwhile performance optimization to keep this mutex.
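The mutex-plus-sort pattern used for funcsyms and the large-stack-frame
errors looks roughly like this self-contained sketch (names illustrative):

package main

import (
	"fmt"
	"sort"
	"sync"
)

var (
	largeFramesMu sync.Mutex
	largeFrames   []string // positions of functions with over-large frames
)

// recordLargeFrame may be called from concurrent backend workers.
func recordLargeFrame(pos string) {
	largeFramesMu.Lock()
	largeFrames = append(largeFrames, pos)
	largeFramesMu.Unlock()
}

// reportLargeFrames runs on the serial path; sorting makes the output
// deterministic regardless of worker scheduling, keeping builds reproducible.
func reportLargeFrames() {
	sort.Strings(largeFrames)
	for _, pos := range largeFrames {
		fmt.Println("stack frame too large at", pos)
	}
}

func main() {
	var wg sync.WaitGroup
	for _, pos := range []string{"b.go:10", "a.go:3"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); recordLargeFrame(p) }(pos)
	}
	wg.Wait()
	reportLargeFrames()
}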
TESTING
I have been testing this code locally by running
'go install -race cmd/compile'
and then doing
'go build -a -gcflags=-c=128 std cmd'
for all architectures and a variety of compiler flags.
This obviously needs to be made part of the builders,
but it is too expensive to make part of all.bash.
I have filed #19962 for this.
REPRODUCIBLE BUILDS
This version of the compiler generates reproducible builds.
Testing reproducible builds also needs automation, however,
and is also too expensive for all.bash.
This is #19961.
Also of note is that some of the compiler flags used by 'toolstash -cmp'
are currently incompatible with concurrent backend compilation.
They still work fine with c=1.
Time will tell whether this is a problem.
NEXT STEPS
* Continue to find and fix races and bugs,
using a combination of code inspection, fuzzing,
and hopefully some community experimentation.
I do not know of any outstanding races,
but there probably are some.
* Improve testing.
* Improve performance, for many values of c.
* Integrate with cmd/go and fine tune.
* Support concurrent compilation with the -race flag.
It is a sad irony that it does not yet work.
* Minor code cleanup that has been deferred during
the last month due to uncertainty about the
ultimate shape of this CL.
PERFORMANCE
Here's the buried lede, at last. :)
All benchmarks are from my 8 core 2.9 GHz Intel Core i7 darwin/amd64 laptop.
First, going from tip to this CL with c=1 has almost no impact.
name old time/op new time/op delta
Template 195ms ± 3% 194ms ± 5% ~ (p=0.370 n=30+29)
Unicode 86.6ms ± 3% 87.0ms ± 7% ~ (p=0.958 n=29+30)
GoTypes 548ms ± 3% 555ms ± 4% +1.35% (p=0.001 n=30+28)
Compiler 2.51s ± 2% 2.54s ± 2% +1.17% (p=0.000 n=28+30)
SSA 5.16s ± 3% 5.16s ± 2% ~ (p=0.910 n=30+29)
Flate 124ms ± 5% 124ms ± 4% ~ (p=0.947 n=30+30)
GoParser 146ms ± 3% 146ms ± 3% ~ (p=0.150 n=29+28)
Reflect 354ms ± 3% 352ms ± 4% ~ (p=0.096 n=29+29)
Tar 107ms ± 5% 106ms ± 3% ~ (p=0.370 n=30+29)
XML 200ms ± 4% 201ms ± 4% ~ (p=0.313 n=29+28)
[Geo mean] 332ms 333ms +0.10%
name old user-time/op new user-time/op delta
Template 227ms ± 5% 225ms ± 5% ~ (p=0.457 n=28+27)
Unicode 109ms ± 4% 109ms ± 5% ~ (p=0.758 n=29+29)
GoTypes 713ms ± 4% 721ms ± 5% ~ (p=0.051 n=30+29)
Compiler 3.36s ± 2% 3.38s ± 3% ~ (p=0.146 n=30+30)
SSA 7.46s ± 3% 7.47s ± 3% ~ (p=0.804 n=30+29)
Flate 146ms ± 7% 147ms ± 3% ~ (p=0.833 n=29+27)
GoParser 179ms ± 5% 179ms ± 5% ~ (p=0.866 n=30+30)
Reflect 431ms ± 4% 429ms ± 4% ~ (p=0.593 n=29+30)
Tar 124ms ± 5% 123ms ± 5% ~ (p=0.140 n=29+29)
XML 243ms ± 4% 242ms ± 7% ~ (p=0.404 n=29+29)
[Geo mean] 415ms 415ms +0.02%
name old obj-bytes new obj-bytes delta
Template 382k ± 0% 382k ± 0% ~ (all equal)
Unicode 203k ± 0% 203k ± 0% ~ (all equal)
GoTypes 1.18M ± 0% 1.18M ± 0% ~ (all equal)
Compiler 3.98M ± 0% 3.98M ± 0% ~ (all equal)
SSA 8.28M ± 0% 8.28M ± 0% ~ (all equal)
Flate 230k ± 0% 230k ± 0% ~ (all equal)
GoParser 287k ± 0% 287k ± 0% ~ (all equal)
Reflect 1.00M ± 0% 1.00M ± 0% ~ (all equal)
Tar 190k ± 0% 190k ± 0% ~ (all equal)
XML 416k ± 0% 416k ± 0% ~ (all equal)
[Geo mean] 660k 660k +0.00%
Comparing this CL to itself, from c=1 to c=2
improves real times 20-30%, costs 5-10% more CPU time,
and adds about 2% alloc.
The allocation increase comes from allocating more ssa.Caches.
name old time/op new time/op delta
Template 202ms ± 3% 149ms ± 3% -26.15% (p=0.000 n=49+49)
Unicode 87.4ms ± 4% 84.2ms ± 3% -3.68% (p=0.000 n=48+48)
GoTypes 560ms ± 2% 398ms ± 2% -28.96% (p=0.000 n=49+49)
Compiler 2.46s ± 3% 1.76s ± 2% -28.61% (p=0.000 n=48+46)
SSA 6.17s ± 2% 4.04s ± 1% -34.52% (p=0.000 n=49+49)
Flate 126ms ± 3% 92ms ± 2% -26.81% (p=0.000 n=49+48)
GoParser 148ms ± 4% 107ms ± 2% -27.78% (p=0.000 n=49+48)
Reflect 361ms ± 3% 281ms ± 3% -22.10% (p=0.000 n=49+49)
Tar 109ms ± 4% 86ms ± 3% -20.81% (p=0.000 n=49+47)
XML 204ms ± 3% 144ms ± 2% -29.53% (p=0.000 n=48+45)
name old user-time/op new user-time/op delta
Template 246ms ± 9% 246ms ± 4% ~ (p=0.401 n=50+48)
Unicode 109ms ± 4% 111ms ± 4% +1.47% (p=0.000 n=44+50)
GoTypes 728ms ± 3% 765ms ± 3% +5.04% (p=0.000 n=46+50)
Compiler 3.33s ± 3% 3.41s ± 2% +2.31% (p=0.000 n=49+48)
SSA 8.52s ± 2% 9.11s ± 2% +6.93% (p=0.000 n=49+47)
Flate 149ms ± 4% 161ms ± 3% +8.13% (p=0.000 n=50+47)
GoParser 181ms ± 5% 192ms ± 2% +6.40% (p=0.000 n=49+46)
Reflect 452ms ± 9% 474ms ± 2% +4.99% (p=0.000 n=50+48)
Tar 126ms ± 6% 136ms ± 4% +7.95% (p=0.000 n=50+49)
XML 247ms ± 5% 264ms ± 3% +6.94% (p=0.000 n=48+50)
name old alloc/op new alloc/op delta
Template 38.8MB ± 0% 39.3MB ± 0% +1.48% (p=0.008 n=5+5)
Unicode 29.8MB ± 0% 30.2MB ± 0% +1.19% (p=0.008 n=5+5)
GoTypes 113MB ± 0% 114MB ± 0% +0.69% (p=0.008 n=5+5)
Compiler 443MB ± 0% 447MB ± 0% +0.95% (p=0.008 n=5+5)
SSA 1.25GB ± 0% 1.26GB ± 0% +0.89% (p=0.008 n=5+5)
Flate 25.3MB ± 0% 25.9MB ± 1% +2.35% (p=0.008 n=5+5)
GoParser 31.7MB ± 0% 32.2MB ± 0% +1.59% (p=0.008 n=5+5)
Reflect 78.2MB ± 0% 78.9MB ± 0% +0.91% (p=0.008 n=5+5)
Tar 26.6MB ± 0% 27.0MB ± 0% +1.80% (p=0.008 n=5+5)
XML 42.4MB ± 0% 43.4MB ± 0% +2.35% (p=0.008 n=5+5)
name old allocs/op new allocs/op delta
Template 379k ± 0% 378k ± 0% ~ (p=0.421 n=5+5)
Unicode 322k ± 0% 321k ± 0% ~ (p=0.222 n=5+5)
GoTypes 1.14M ± 0% 1.14M ± 0% ~ (p=0.548 n=5+5)
Compiler 4.12M ± 0% 4.11M ± 0% -0.14% (p=0.032 n=5+5)
SSA 9.72M ± 0% 9.72M ± 0% ~ (p=0.421 n=5+5)
Flate 234k ± 1% 234k ± 0% ~ (p=0.421 n=5+5)
GoParser 316k ± 1% 315k ± 0% ~ (p=0.222 n=5+5)
Reflect 980k ± 0% 979k ± 0% ~ (p=0.095 n=5+5)
Tar 249k ± 1% 249k ± 1% ~ (p=0.841 n=5+5)
XML 392k ± 0% 391k ± 0% ~ (p=0.095 n=5+5)
From c=1 to c=4, real time is down ~40%, CPU usage up 10-20%, alloc up ~5%:
name old time/op new time/op delta
Template 203ms ± 3% 131ms ± 5% -35.45% (p=0.000 n=50+50)
Unicode 87.2ms ± 4% 84.1ms ± 2% -3.61% (p=0.000 n=48+47)
GoTypes 560ms ± 4% 310ms ± 2% -44.65% (p=0.000 n=50+49)
Compiler 2.47s ± 3% 1.41s ± 2% -43.10% (p=0.000 n=50+46)
SSA 6.17s ± 2% 3.20s ± 2% -48.06% (p=0.000 n=49+49)
Flate 126ms ± 4% 74ms ± 2% -41.06% (p=0.000 n=49+48)
GoParser 148ms ± 4% 89ms ± 3% -39.97% (p=0.000 n=49+50)
Reflect 360ms ± 3% 242ms ± 3% -32.81% (p=0.000 n=49+49)
Tar 108ms ± 4% 73ms ± 4% -32.48% (p=0.000 n=50+49)
XML 203ms ± 3% 119ms ± 3% -41.56% (p=0.000 n=49+48)
name old user-time/op new user-time/op delta
Template 246ms ± 9% 287ms ± 9% +16.98% (p=0.000 n=50+50)
Unicode 109ms ± 4% 118ms ± 5% +7.56% (p=0.000 n=46+50)
GoTypes 735ms ± 4% 806ms ± 2% +9.62% (p=0.000 n=50+50)
Compiler 3.34s ± 4% 3.56s ± 2% +6.78% (p=0.000 n=49+49)
SSA 8.54s ± 3% 10.04s ± 3% +17.55% (p=0.000 n=50+50)
Flate 149ms ± 6% 176ms ± 3% +17.82% (p=0.000 n=50+48)
GoParser 181ms ± 5% 213ms ± 3% +17.47% (p=0.000 n=50+50)
Reflect 453ms ± 6% 499ms ± 2% +10.11% (p=0.000 n=50+48)
Tar 126ms ± 5% 149ms ±11% +18.76% (p=0.000 n=50+50)
XML 246ms ± 5% 287ms ± 4% +16.53% (p=0.000 n=49+50)
name old alloc/op new alloc/op delta
Template 38.8MB ± 0% 40.4MB ± 0% +4.21% (p=0.008 n=5+5)
Unicode 29.8MB ± 0% 30.9MB ± 0% +3.68% (p=0.008 n=5+5)
GoTypes 113MB ± 0% 116MB ± 0% +2.71% (p=0.008 n=5+5)
Compiler 443MB ± 0% 455MB ± 0% +2.75% (p=0.008 n=5+5)
SSA 1.25GB ± 0% 1.27GB ± 0% +1.84% (p=0.008 n=5+5)
Flate 25.3MB ± 0% 26.9MB ± 1% +6.31% (p=0.008 n=5+5)
GoParser 31.7MB ± 0% 33.2MB ± 0% +4.61% (p=0.008 n=5+5)
Reflect 78.2MB ± 0% 80.2MB ± 0% +2.53% (p=0.008 n=5+5)
Tar 26.6MB ± 0% 27.9MB ± 0% +5.19% (p=0.008 n=5+5)
XML 42.4MB ± 0% 44.6MB ± 0% +5.20% (p=0.008 n=5+5)
name old allocs/op new allocs/op delta
Template 380k ± 0% 379k ± 0% -0.39% (p=0.032 n=5+5)
Unicode 321k ± 0% 321k ± 0% ~ (p=0.841 n=5+5)
GoTypes 1.14M ± 0% 1.14M ± 0% ~ (p=0.421 n=5+5)
Compiler 4.12M ± 0% 4.14M ± 0% +0.52% (p=0.008 n=5+5)
SSA 9.72M ± 0% 9.76M ± 0% +0.37% (p=0.008 n=5+5)
Flate 234k ± 1% 234k ± 1% ~ (p=0.690 n=5+5)
GoParser 316k ± 0% 317k ± 1% ~ (p=0.841 n=5+5)
Reflect 981k ± 0% 981k ± 0% ~ (p=1.000 n=5+5)
Tar 250k ± 0% 249k ± 1% ~ (p=0.151 n=5+5)
XML 393k ± 0% 392k ± 0% ~ (p=0.056 n=5+5)
Going beyond c=4 on my machine tends to increase CPU time and allocs
without impacting real time.
The CPU time numbers matter, because when there are many concurrent
compilation processes, that will impact the overall throughput.
The numbers above are in many ways the best case scenario;
we can take full advantage of all cores.
Fortunately, the most common compilation scenario is incremental
re-compilation of a single package during a build/test cycle.
Updates #15756
Change-Id: I6725558ca2069edec0ac5b0d1683105a9fff6bea
Reviewed-on: https://go-review.googlesource.com/40693
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When Lstat is used on a symlink that points to a directory,
the function returns a FileInfo with both ModeDir and ModeSymlink set.
Change that to never set ModeDir if ModeSymlink is set.
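With this change, code that wants to know about the link target must follow
the link explicitly; a minimal sketch (the path is hypothetical):

package main

import (
	"fmt"
	"os"
)

func main() {
	const path = `C:\some\symlink` // hypothetical symlink
	fi, err := os.Lstat(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	if fi.Mode()&os.ModeSymlink != 0 {
		// ModeDir is no longer set here, even if the link points at a directory.
		// To learn about the target, follow the link with os.Stat.
		if target, err := os.Stat(path); err == nil && target.IsDir() {
			fmt.Println(path, "is a symlink to a directory")
		}
	}
}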
Fixes #10424
Fixes #17540
Fixes #17541
Change-Id: Iba280888aad108360b8c1f18180a24493fe7ad2b
Reviewed-on: https://go-review.googlesource.com/41830
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Taken from the Intel Software Development Manual (of course, in the line
below it's ADC DST, SRC; the opposite of the commit subject).
12 /r ADC r8, r/m8
We need 0x12 for the corresponding ytab line, not 0x10.
{Ymb, Ynone, Yrb, Zm_r, 1},
Updates #14069
Change-Id: Id37cbd0c581c9988c2de355efa908956278e2189
Reviewed-on: https://go-review.googlesource.com/41857
Reviewed-by: Keith Randall <khr@golang.org>
Delete the old TestRuntimeFunctionTrimming, which tests a dead API
and is now handled by end-to-end tests.
Change-Id: I64fc2991ed4a7690456356b5f6b546f36935bb67
Reviewed-on: https://go-review.googlesource.com/41815
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
This is a direct port of the version from
commit a60ad46e0ed33d02e09bda439efaf9c9727dbc6c
(https://go-review.googlesource.com/c/37342/).
Updates #17973
Updates #18177
Change-Id: I63fa5ef6951b1edd39f84927d1181a4df9b15385
Reviewed-on: https://go-review.googlesource.com/36617
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Follow-up to review comments on CL 41797.
Mask the input to set2 and set3, so that at the very least,
we won't corrupt the rest of the flags in case of a bad input.
It also seems more semantically appropriate.
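A self-contained sketch of the masking idea (the type and method names are
illustrative, not necessarily the compiler's):

package main

import "fmt"

type bitset8 uint8

// set2 stores b into a 2-bit field at shift, masking b so that a bad value
// cannot clobber neighboring flag bits.
func (f *bitset8) set2(shift uint, b uint8) {
	*f = *f&^(3<<shift) | bitset8(b&3)<<shift
}

func main() {
	var flags bitset8 = 0xF0
	flags.set2(2, 7)            // 7 does not fit in 2 bits; it is masked to 3
	fmt.Printf("%08b\n", flags) // 11111100
}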
Do minor cleanup in addrescapes. I started on larger cleanup,
but it wasn't clear that it was an improvement.
Add warning comments and sanity checks to Initorder and Class constants,
to attempt to prevent them from overflowing their allotted flag bits.
Passes toolstash-check.
Change-Id: I57b9661ba36f56406aa7a1d8da9b7c70338f9119
Reviewed-on: https://go-review.googlesource.com/41817
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When the stack register is decremented to acquire stack space at
the beginning of a function, a MOVDU should be used so it is done
atomically, unless the size of the stack frame is too large for
that instruction. The code that chose between MOVDU and MOVD
checked whether the function was a leaf and always generated MOVD
if it was. The choice of MOVD vs. MOVDU should depend only on the stack
frame size. This fixes that problem.
Change-Id: I0e49c79036f1e8f7584179e1442b938fc6da085f
Reviewed-on: https://go-review.googlesource.com/41813
Reviewed-by: Michael Munday <munday@ca.ibm.com>
In many cases the records returned by Reader.Read will only be used between calls
to Read and become garbage once a new record is read. In this case, instead of
allocating a new slice on each call to Read, we can reuse the last allocated slice
for successive calls to avoid unnecessary allocations.
This change adds a new field, ReuseRecord, to the Reader struct to enable this reuse.
ReuseRecord is false by default to avoid breaking existing code that depends on
the current behaviour.
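Typical use looks like this (an illustration, not code from the CL):

package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"strings"
)

func main() {
	r := csv.NewReader(strings.NewReader("a,b\nc,d\n"))
	r.ReuseRecord = true // returned slices are only valid until the next Read
	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Println(err)
			return
		}
		// Use record now, or copy it if it must outlive the next call to Read.
		fmt.Println(record)
	}
}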
I also added 4 new benchmarks, corresponding to the existing Read benchmarks, which
set ReuseRecord to true.
Benchstat on my local machine (old is ReuseRecord = false, new is ReuseRecord = true)
name old time/op new time/op delta
Read-8 2.75µs ± 2% 1.88µs ± 1% -31.52% (p=0.000 n=14+15)
ReadWithFieldsPerRecord-8 2.75µs ± 0% 1.89µs ± 1% -31.43% (p=0.000 n=13+13)
ReadWithoutFieldsPerRecord-8 2.77µs ± 1% 1.88µs ± 1% -32.06% (p=0.000 n=15+15)
ReadLargeFields-8 55.4µs ± 1% 54.2µs ± 0% -2.07% (p=0.000 n=15+14)
name old alloc/op new alloc/op delta
Read-8 664B ± 0% 24B ± 0% -96.39% (p=0.000 n=15+15)
ReadWithFieldsPerRecord-8 664B ± 0% 24B ± 0% -96.39% (p=0.000 n=15+15)
ReadWithoutFieldsPerRecord-8 664B ± 0% 24B ± 0% -96.39% (p=0.000 n=15+15)
ReadLargeFields-8 3.94kB ± 0% 2.98kB ± 0% -24.39% (p=0.000 n=15+15)
name old allocs/op new allocs/op delta
Read-8 18.0 ± 0% 8.0 ± 0% -55.56% (p=0.000 n=15+15)
ReadWithFieldsPerRecord-8 18.0 ± 0% 8.0 ± 0% -55.56% (p=0.000 n=15+15)
ReadWithoutFieldsPerRecord-8 18.0 ± 0% 8.0 ± 0% -55.56% (p=0.000 n=15+15)
ReadLargeFields-8 24.0 ± 0% 12.0 ± 0% -50.00% (p=0.000 n=15+15)
Fixes #19721
Change-Id: I79b14128bb9bb3465f53f40f93b1b528a9da6f58
Reviewed-on: https://go-review.googlesource.com/41730
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Some large testing/build systems require some form of test discovery before
running tests. This usually allows for analytics, history, and stats on a
per-test basis. Typically these systems are used in multi-language
environments where the original source code is not known or available.
This adds a -test.list option which takes a regular expression as an
argument. Any tests, benchmarks, or examples that match that regular
expression will be printed, one per line, to stdout and then the program
will exit.
Since subtests are named and discovered at run time, this will only show
top-level test names; this is a known limitation.
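For example, running a compiled test binary (the test names here are
hypothetical):
$ ./pkg.test -test.list '.*'
TestReader
TestWriter
BenchmarkRead
ExampleReader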
Fixes #17209
Change-Id: I7e607f5f4f084d623a1cae88a1f70e7d92b7f13e
Reviewed-on: https://go-review.googlesource.com/41195
Reviewed-by: Marcel van Lohuizen <mpvl@golang.org>
Run-TryBot: Marcel van Lohuizen <mpvl@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>