mirror of https://github.com/golang/go
Commit Graph

97 Commits

Author SHA1 Message Date
David du Colombier
9f9c9abb7e cmd/8g, cmd/gc: fix warnings on Plan 9
warning: src/cmd/8g/ggen.c:35 non-interruptable temporary
warning: src/cmd/gc/walk.c:656 set and not used: l
warning: src/cmd/gc/walk.c:658 set and not used: l

LGTM=minux.ma
R=golang-codereviews, minux.ma
CC=golang-codereviews
https://golang.org/cl/83660043
2014-04-02 21:33:50 +02:00
Keith Randall
47acf16709 cmd/gc: Don't zero more than we need.
Don't merge with the zero range; we may
end up zeroing more than we need.

LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/83430044
2014-04-02 09:17:42 -07:00
Keith Randall
383963b506 runtime: zero at start of frame more efficiently.
Use Duff's device for zeroing.  Combine adjacent regions.

Update #7680
Update #7624

LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/83200045
2014-04-01 19:44:07 -07:00
Russ Cox
9c8f11ff96 cmd/5g, cmd/8g: fix build
Botched during CL 83090046.

TBR=khr
CC=golang-codereviews
https://golang.org/cl/83070046
2014-04-01 20:24:53 -04:00
Russ Cox
daca06f2e3 cmd/gc: shorten more temporary lifetimes
1. In functions with heap-allocated result variables or with
defer statements, the return sequence requires more than
just a single RET instruction. There is an optimization that
arranges for all returns to jump to a single copy of the return
epilogue in this case. Unfortunately, that optimization is
fundamentally incompatible with PC-based liveness information:
it takes PCs at many different points in the function and makes
them all land at one PC, making the combined liveness information
at that target PC a mess. Disable this optimization, so that each
return site gets its own copy of the 'call deferreturn' and the
copying of result variables back from the heap.
This removes quite a few spurious 'ambiguously live' variables.

2. Let orderexpr allocate temporaries that are passed by address
to a function call and then die on return, so that we can arrange
an appropriate VARKILL.

2a. Do this for ... slices.

2b. Do this for closure structs.

2c. Do this for runtime.concatstring, which is the implementation
of large string additions. Change representation of OADDSTR to
an explicit list in typecheck to avoid reconstructing list in both
walk and order.

3. Let orderexpr allocate the temporary variable copies used for
range loops, so that they can be killed when the loop is over.
Similarly, let it allocate the temporary holding the map iterator.

CL 81940043 reduced the number of ambiguously live temps
in the godoc binary from 860 to 711.

This CL reduces the number to 121. Still more to do, but another
good checkpoint.

Update #7345

LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/83090046
2014-04-01 20:02:54 -04:00
Keith Randall
6c7cbf086c runtime: get rid of most uses of REP for copying/zeroing.
REP MOVSQ and REP STOSQ have a really high startup overhead.
Use a Duff's device to do the repetition instead.

benchmark                 old ns/op     new ns/op     delta
BenchmarkClearFat32       7.20          1.60          -77.78%
BenchmarkCopyFat32        6.88          2.38          -65.41%
BenchmarkClearFat64       7.15          3.20          -55.24%
BenchmarkCopyFat64        6.88          3.44          -50.00%
BenchmarkClearFat128      9.53          5.34          -43.97%
BenchmarkCopyFat128       9.27          5.56          -40.02%
BenchmarkClearFat256      13.8          9.53          -30.94%
BenchmarkCopyFat256       13.5          10.3          -23.70%
BenchmarkClearFat512      22.3          18.0          -19.28%
BenchmarkCopyFat512       22.0          19.7          -10.45%
BenchmarkCopyFat1024      36.5          38.4          +5.21%
BenchmarkClearFat1024     35.1          35.0          -0.28%

TODO: use for stack frame zeroing
TODO: REP prefixes are still used for "reverse" copying when src/dst
regions overlap.  Might be worth fixing.
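
As a rough illustration of the technique (a minimal Go sketch, not the
generated assembly; clearFat and the unroll factor of 4 are invented for
the example), a Duff-style clear unrolls the body and enters the tail at
a computed point via fallthrough:

        // clearFat zeroes p by clearing four words per iteration and then
        // jumping into the middle of an unrolled remainder, the same idea
        // Duff's device expresses with a computed goto.
        func clearFat(p []uint64) {
                i := 0
                for ; len(p)-i >= 4; i += 4 { // unrolled main body
                        p[i], p[i+1], p[i+2], p[i+3] = 0, 0, 0, 0
                }
                switch len(p) - i { // enter the remainder at the right point
                case 3:
                        p[i+2] = 0
                        fallthrough
                case 2:
                        p[i+1] = 0
                        fallthrough
                case 1:
                        p[i] = 0
                }
        }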

LGTM=rsc
R=golang-codereviews, rsc
CC=golang-codereviews, r
https://golang.org/cl/81370046
2014-04-01 12:51:02 -07:00
Russ Cox
b700cb4974 cmd/gc: shorten temporary lifetimes when possible
The new channel and map runtime routines take pointers
to values, typically temporaries. Without help, the compiler
cannot tell when those temporaries stop being needed,
because it isn't sure what happened to the pointer.
Arrange to insert explicit VARKILL instructions for these
temporaries so that the liveness analysis can avoid seeing
them as "ambiguously live".

The change is made in order.c, which was already in charge of
introducing temporaries to preserve the order-of-evaluation
guarantees. Now its job has expanded to include introducing
temporaries as needed by runtime routines, and then also
inserting the VARKILL annotations for all these temporaries,
so that their lifetimes can be shortened.

In order to do its job for the map runtime routines, order.c arranges
that all map lookups or map assignments have the form:

        x = m[k]
        x, y = m[k]
        m[k] = x

where x, y, and k are simple variables (often temporaries).
Likewise, receiving from a channel is now always:

        x = <-c

In order to provide the map guarantee, order.c is responsible for
rewriting x op= y into x = x op y, so that m[k] += z becomes

        t = m[k]
        t2 = t + z
        m[k] = t2
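
For illustration only, a Go-level sketch of the effect of that rewrite
(inc and counts are invented names; the real transformation happens on
the compiler's syntax trees, not in source):

        // As written by the programmer:
        func inc(counts map[string]int, k string, z int) {
                counts[k] += z
        }

        // As arranged by order.c, in effect: a load into a temporary,
        // the addition, and a plain map assignment.
        func incOrdered(counts map[string]int, k string, z int) {
                t := counts[k]
                t2 := t + z
                counts[k] = t2
        }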

While here, fix a few bugs in order.c's traversal: it was failing to
walk into select and switch case bodies, so order of evaluation
guarantees were not preserved in those situations.
Added tests to test/reorder2.go.

Fixes #7671.

In gc/popt's temporary-merging optimization, allow merging
of temporaries with their address taken as long as the liveness
ranges do not intersect. (There is a good chance of that now
that we have VARKILL annotations to limit the liveness range.)

Explicitly killing temporaries cuts the number of ambiguously
live temporaries that must be zeroed in the godoc binary from
860 to 711, or -17%. There is more work to be done, but this
is a good checkpoint.

Update #7345

LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/81940043
2014-04-01 13:31:38 -04:00
Russ Cox
6722d45631 cmd/gc: liveness-related bug fixes
1. On entry to a function, only zero the ambiguously live stack variables.
Before, we were zeroing all stack variables containing pointers.
The zeroing is pretty inefficient right now (issue 7624), but there are also
too many stack variables detected as ambiguously live (issue 7345),
and that must be addressed before deciding how to improve the zeroing code.
(Changes in 5g/ggen.c, 6g/ggen.c, 8g/ggen.c, gc/pgen.c)

Fixes #7647.

2. Make the regopt word-based liveness analysis preserve the
whole-variable liveness property expected by the garbage collection
bitmap liveness analysis. That is, if the regopt liveness decides that
one word in a struct needs to be preserved, make sure it preserves
the entire struct. This is particularly important for multiword values
such as strings, slices, and interfaces, in which all the words need
to be present in order to understand the meaning.
(Changes in 5g/reg.c, 6g/reg.c, 8g/reg.c.)

Fixes #7591.

3. Make the regopt word-based liveness analysis treat a variable
as having its address taken - which makes it preserved across
all future calls - whenever n->addrtaken is set, for consistency
with the gc bitmap liveness analysis, even if there is no machine
instruction actually taking the address. In this case n->addrtaken
is incorrect (a nicer way to put it is overconservative), and ideally
there would be no such cases, but they can happen and the two
analyses need to agree.
(Changes in 5g/reg.c, 6g/reg.c, 8g/reg.c; test in bug484.go.)

Fixes crashes found by turning off "zero everything" in step 1.

4. Remove spurious VARDEF annotations. As the comment in
gc/pgen.c explains, the VARDEF must immediately precede
the initialization. It cannot be too early, and it cannot be too late.
In particular, if a function call sits between the VARDEF and the
actual machine instructions doing the initialization, the variable
will be treated as live during that function call even though it is
uninitialized, leading to problems.
(Changes in gc/gen.c; test in live.go.)

Fixes crashes found by turning off "zero everything" in step 1.

5. Do not treat loading the address of a wide value as a signal
that the value must be initialized. Instead depend on the existence
of a VARDEF or the first actual read/write of a word in the value.
If the load is in order to pass the address to a function that does
the actual initialization, treating the load as an implicit VARDEF
causes the same problems as described in step 4.
The alternative is to arrange to zero every such value before
passing it to the real initialization function, but this is a much
easier and more efficient change.
(Changes in gc/plive.c.)

Fixes crashes found by turning off "zero everything" in step 1.

6. Treat wide input parameters with their address taken as
initialized on entry to the function. Otherwise they look
"ambiguously live" and we will try to emit code to zero them.
(Changes in gc/plive.c.)

Fixes crashes found by turning off "zero everything" in step 1.

7. An array of length 0 has no pointers, even if the element type does.
Without this change, the zeroing code complains when asked to
clear a 0-length array.
(Changes in gc/reflect.c.)

LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/80160044
2014-03-27 14:05:57 -04:00
Keith Randall
14664050b9 cmd/6g: do small zeroings with straightline code.
Removes most uses of the REP prefix, which has a high startup cost.

LGTM=iant
R=golang-codereviews, iant, khr
CC=golang-codereviews
https://golang.org/cl/77920043
2014-03-19 15:41:34 -07:00
Josh Bleecher Snyder
d97d37188a 5g, 8g: remove dead code
maxstksize is superfluous and appears to be vestigial. 6g does not use it.

c >= 4 cannot occur; c = w % 4.

LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/68750043
2014-02-25 14:43:53 -08:00
Dave Cheney
7c8280c9ef all: merge NaCl branch (part 1)
See golang.org/s/go13nacl for design overview.

This CL contains the mostly mechanical changes from rsc's Go 1.2 based NaCl branch, specifically 39cb35750369 to 500771b477cf from https://code.google.com/r/rsc-go13nacl. This CL does not include working NaCl support; there are probably two or three more large merges to come.

CL 15750044 is not included as it involves more invasive changes to the linker which will need to be merged separately.

The exact change lists included are

15050047: syscall: support for Native Client
15360044: syscall: unzip implementation for Native Client
15370044: syscall: Native Client SRPC implementation
15400047: cmd/dist, cmd/go, go/build, test: support for Native Client
15410048: runtime: support for Native Client
15410049: syscall: file descriptor table for Native Client
15410050: syscall: in-memory file system for Native Client
15440048: all: update +build lines for Native Client port
15540045: cmd/6g, cmd/8g, cmd/gc: support for Native Client
15570045: os: support for Native Client
15680044: crypto/..., hash/crc32, reflect, sync/atomic: support for amd64p32
15690044: net: support for Native Client
15690048: runtime: support for fake time like on Go Playground
15690051: build: disable various tests on Native Client

LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/68150047
2014-02-25 09:47:42 -05:00
Russ Cox
1ca1cbea65 cmd/5g, cmd/8g: zero ambiguously live values on entry
The code here is being restored after its deletion in CL 14430048.

I restored the copy in cmd/6g in CL 56430043 but neglected the
other two.

This is the reason that enabling precisestack only worked on amd64.

LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/66170043
2014-02-19 17:08:55 -05:00
Russ Cox
7a7c0ffb47 cmd/gc: correct liveness for fat variables
The VARDEF placement must be before the initialization
but after any final use. If you have something like s = ... using s ...
the rhs must be evaluated, then the VARDEF, then the lhs
assigned.

There is a large comment in pgen.c on gvardef explaining
this in more detail.

This CL also includes Ian's suggestions from earlier CLs,
namely commenting the use of mode in link.h and fixing
the precedence of the ~r check in dcl.c.

This CL enables the check that if liveness analysis decides
a variable is live on entry to the function, that variable must
be a function parameter (not a result, and not a local variable).
If this check fails, it indicates a bug in the liveness analysis or
in the generated code being analyzed.

The race detector generates invalid code for append(x, y...).
The code declares a temporary t and then uses cap(t) before
initializing t. The new liveness check catches this bug and
stops the compiler from writing out the buggy code.
Consequently, this CL disables the race detector tests in
run.bash until the race detector bug can be fixed
(golang.org/issue/7334).

Except for the race detector bug, the liveness analysis check
does not detect any problems (this CL and the previous CLs
fixed all the detected problems).

The net test still fails with GOGC=0 but the rest of the tests
now pass or time out (because GOGC=0 is so slow).

TBR=iant
CC=golang-codereviews
https://golang.org/cl/64170043
2014-02-15 10:58:55 -05:00
Russ Cox
7addda685d cmd/5g, cmd/8g: fix build
The test added in CL 63630043 fails on 5g and 8g because they
were not emitting the VARDEF instruction when clearing a fat
value by clearing the components. 6g had the call in the right place.

Hooray tests.

TBR=iant
CC=golang-codereviews
https://golang.org/cl/63660043
2014-02-13 22:30:35 -05:00
Russ Cox
801e40a0a4 cmd/gc: rename AFATVARDEF to AVARDEF
The "fat" referred to being used for multiword values only.
We're going to use it for non-fat values sometimes too.

No change other than the renaming.

TBR=iant
CC=golang-codereviews
https://golang.org/cl/63650043
2014-02-13 22:17:22 -05:00
Anthony Martin
2cae0591cd cmd/cc, cmd/gc, cmd/ld: consolidate print format routines
We now use the %A, %D, %P, and %R routines from liblink
across the board.

Fixes #7178.
Fixes #7055.

LGTM=iant
R=golang-codereviews, gobot, rsc, dave, iant, remyoudompheng
CC=golang-codereviews
https://golang.org/cl/49170043
2014-02-12 14:29:11 -05:00
Russ Cox
f606c1be80 cmd/5g, cmd/6g, cmd/8g: use liblink
Preparation for golang.org/s/go13linker work.

This CL does not build by itself. It depends on 35740044
and 35790044 and will be submitted at the same time.

R=iant
CC=golang-dev
https://golang.org/cl/34590045
2013-12-08 22:51:55 -05:00
Carl Shapiro
f056daf075 cmd/5g, cmd/5l, cmd/6g, cmd/6l, cmd/8g, cmd/8l, cmd/gc, runtime: generate pointer maps by liveness analysis
This change allows the garbage collector to examine stack
slots that are determined as live and containing a pointer
value by the garbage collector.  This results in a mean
reduction of 65% in the number of stack slots scanned during
an invocation of "GOGC=1 all.bash".

Unfortunately, this does not yet allow garbage collection to
be precise for the stack slots computed as live.  Pointers
confound the determination of what definitions reach a given
instruction.  In general, this problem is not solvable without
runtime cost but some advanced cooperation from the compiler
might mitigate common cases.

R=golang-dev, rsc, cshapiro
CC=golang-dev
https://golang.org/cl/14430048
2013-12-05 17:35:22 -08:00
Russ Cox
aa0439ba65 cmd/gc: eliminate redundant &x.Field nil checks
This eliminates ~75% of the nil checks being emitted,
on all architectures. We can do better, but we need
a bit more general support from the compiler, and
I don't want to do that so close to Go 1.2.
What's here is simple but effective and safe.

A few small code generation cleanups were required
to make the analysis consistent on all systems about
which nil checks are omitted, at least in the test.

Fixes #6019.

R=ken2
CC=golang-dev
https://golang.org/cl/13334052
2013-09-17 16:54:22 -04:00
Russ Cox
af2a3193af cmd/5g, cmd/6g, cmd/8g: simplify for loop in bitmap generation
Lucio De Re reports that the more complex
loop miscompiles on Plan 9.

R=ken2
CC=golang-dev
https://golang.org/cl/13602043
2013-09-06 16:49:11 -04:00
Russ Cox
3b4d792606 cmd/gc: separate "has pointers" from "needs zeroing" in stack frame
When the new call site-specific frame bitmaps are available,
we can cut the zeroing to just those values that need it due
to scope escaping.

R=cshapiro, cshapiro
CC=golang-dev
https://golang.org/cl/13045043
2013-08-16 21:45:59 -04:00
Carl Shapiro
d3b04f46b5 cmd/5g, cmd/6g, cmd/8g: update frame zeroing for new bitmap format
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12740046
2013-08-16 01:15:04 -04:00
Russ Cox
999a36f9af cmd/gc: &x panics if x does
See golang.org/s/go12nil.

This CL is about getting all the right checks inserted.
A followup CL will add an optimization pass to
remove redundant checks.
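
A small Go example of the guarantee (T and addr are invented names): if
evaluating x would panic, then &x must panic too, so a nil check is
inserted before the field address is taken.

        type T struct{ f int }

        func addr(p *T) *int {
                return &p.f // must panic at run time when p is nil
        }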

R=ken2
CC=golang-dev
https://golang.org/cl/12970043
2013-08-15 14:38:32 -04:00
Russ Cox
7910cd68b5 cmd/gc: zero pointers on entry to function
On entry to a function, zero the results and zero the pointer
section of the local variables.

This is an intermediate step on the way to precise collection
of Go frames.

This can incur a significant (up to 30%) slowdown, but it also ensures
that the garbage collector never looks at a word in a Go frame
and sees a stale pointer value that could cause a space leak.
(C frames and assembly frames are still possibly problematic.)

This CL is required to start making collection of interface values
as precise as collection of pointer values is today.
Since we have to dereference the interface type to understand
whether the value is a pointer, it is critical that the type field be
initialized.

A future CL by Carl will make the garbage collection pointer
bitmaps context-sensitive. At that point it will be possible to
remove most of the zeroing. The only values that will still need
zeroing are values whose addresses escape the block scoping
of the function but do not escape to the heap.

benchmark                         old ns/op    new ns/op    delta
BenchmarkBinaryTree17            4420289180   4331060459   -2.02%
BenchmarkFannkuch11              3442469663   3277706251   -4.79%
BenchmarkFmtFprintfEmpty                100          142  +42.00%
BenchmarkFmtFprintfString               262          310  +18.32%
BenchmarkFmtFprintfInt                  213          281  +31.92%
BenchmarkFmtFprintfIntInt               355          431  +21.41%
BenchmarkFmtFprintfPrefixedInt          321          383  +19.31%
BenchmarkFmtFprintfFloat                444          533  +20.05%
BenchmarkFmtManyArgs                   1380         1559  +12.97%
BenchmarkGobDecode                 10240054     11794915  +15.18%
BenchmarkGobEncode                 17350274     19970478  +15.10%
BenchmarkGzip                     455179460    460699139   +1.21%
BenchmarkGunzip                   114271814    119291574   +4.39%
BenchmarkHTTPClientServer             89051        89894   +0.95%
BenchmarkJSONEncode                40486799     52691558  +30.15%
BenchmarkJSONDecode                94193361    112428781  +19.36%
BenchmarkMandelbrot200              4747060      4748043   +0.02%
BenchmarkGoParse                    6363798      6675098   +4.89%
BenchmarkRegexpMatchEasy0_32            129          171  +32.56%
BenchmarkRegexpMatchEasy0_1K            365          395   +8.22%
BenchmarkRegexpMatchEasy1_32            106          152  +43.40%
BenchmarkRegexpMatchEasy1_1K            952         1245  +30.78%
BenchmarkRegexpMatchMedium_32           198          283  +42.93%
BenchmarkRegexpMatchMedium_1K         79006       101097  +27.96%
BenchmarkRegexpMatchHard_32            3478         5115  +47.07%
BenchmarkRegexpMatchHard_1K          110245       163582  +48.38%
BenchmarkRevcomp                  777384355    793270857   +2.04%
BenchmarkTemplate                 136713089    157093609  +14.91%
BenchmarkTimeParse                     1511         1761  +16.55%
BenchmarkTimeFormat                     535          850  +58.88%

benchmark                          old MB/s     new MB/s  speedup
BenchmarkGobDecode                    74.95        65.07    0.87x
BenchmarkGobEncode                    44.24        38.43    0.87x
BenchmarkGzip                         42.63        42.12    0.99x
BenchmarkGunzip                      169.81       162.67    0.96x
BenchmarkJSONEncode                   47.93        36.83    0.77x
BenchmarkJSONDecode                   20.60        17.26    0.84x
BenchmarkGoParse                       9.10         8.68    0.95x
BenchmarkRegexpMatchEasy0_32         247.24       186.31    0.75x
BenchmarkRegexpMatchEasy0_1K        2799.20      2591.93    0.93x
BenchmarkRegexpMatchEasy1_32         299.31       210.44    0.70x
BenchmarkRegexpMatchEasy1_1K        1074.71       822.45    0.77x
BenchmarkRegexpMatchMedium_32          5.04         3.53    0.70x
BenchmarkRegexpMatchMedium_1K         12.96        10.13    0.78x
BenchmarkRegexpMatchHard_32            9.20         6.26    0.68x
BenchmarkRegexpMatchHard_1K            9.29         6.26    0.67x
BenchmarkRevcomp                     326.95       320.40    0.98x
BenchmarkTemplate                     14.19        12.35    0.87x

R=cshapiro
CC=golang-dev
https://golang.org/cl/12616045
2013-08-09 23:10:58 -04:00
Dmitriy Vyukov
8679d5f2b5 cmd/gc: record argument size for all indirect function calls
This is required to properly unwind reflect.methodValueCall/makeFuncStub.
Fixes #5954.
Stats for 'go install std':
61849 total INSTCALL
24655 currently have ArgSize metadata
27278 have ArgSize metadata with this change
godoc size before: 11351888, after: 11364288

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12163043
2013-07-31 20:00:33 +04:00
Daniel Morsing
85a7c090c4 cmd/8g: Make clearfat non-interleaved with pointer calculations.
clearfat (used to zero-initialize structures) will use AX for x86 block ops. If we write to AX while calculating the dest pointer, we will fill the structure with incorrect values.
Since 64-bit arithmetic uses AX to synthesize a 64-bit register, getting an address by indexing with 64-bit ops can clobber the register.

Fixes #5820.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/11383043
2013-07-17 11:04:34 +02:00
Russ Cox
7b3c8b7ac8 cmd/5g, cmd/6g, cmd/8g: insert arg size annotations on runtime calls
If calling a function in package runtime, emit argument size
information around the call in case the call is to a variadic C function.

R=ken2
CC=golang-dev
https://golang.org/cl/11371043
2013-07-16 16:25:10 -04:00
Russ Cox
7e97d39879 cmd/5g, cmd/6g, cmd/8g: fix line number of caller of deferred func
Deferred functions are not run by a call instruction. They are run by
the runtime editing registers to make the call start with a caller PC
returning to a
        CALL deferreturn
instruction.

That instruction has always had the line number of the function's
closing brace, but that instruction's line number is irrelevant.
Stack traces show the line number of the instruction before the
return PC, because normally that's what started the call. Not so here.
The instruction before the CALL deferreturn could be almost anywhere
in the function; it's unrelated, and its line number is the wrong one to show.

Fix the line number by inserting a true hardware no-op with the right
line number before the returned-to CALL instruction. That is, the deferred
calls now appear to start with a caller PC returning to the second instruction
in this sequence:
        NOP
        CALL deferreturn

The traceback will show the line number of the NOP, which we've set
to be the line number of the function's closing brace.

The NOP here is not the usual pseudo-instruction, which would be
elided by the linker. Instead it is the real hardware instruction:
XCHG AX, AX on 386 and amd64, and AND.EQ R0, R0, R0 on ARM.

Fixes #5856.

R=ken2, ken
CC=golang-dev
https://golang.org/cl/11223043
2013-07-12 13:47:55 -04:00
Russ Cox
0713293374 cmd/5g, cmd/6g, cmd/8g: fix comment
Keeping the string "compactframe" because that's what
I always search for to find this code. But point to the real place too.

TBR=iant
CC=golang-dev
https://golang.org/cl/10676047
2013-06-28 12:06:25 -07:00
Russ Cox
1f51d27922 cmd/gc: move genembedtramp into portable code
Requires adding new linker instruction
        RET	f(SB)
meaning return but then immediately call f.
This is what you'd use to implement a tail call after
fiddling with the arguments, but the compiler only
uses it in genwrapper.

This CL eliminates the copy-and-paste genembedtramp
functions from 5g/8g/6g and makes the code run on ARM
for the first time. It removes a small special case for function
generation, which should help Carl a bit, but at the same time
it does not bother to implement general tail call optimization,
which we do not want anyway.

Fixes #5627.

R=ken2
CC=golang-dev
https://golang.org/cl/10057044
2013-06-11 09:41:49 -04:00
Russ Cox
1d5dc4fd48 cmd/gc: emit explicit type information for local variables
The type information is (and for years has been) included
as an extra field in the address chunk of an instruction.
Unfortunately, suppose there is a string at a+24(FP) and
we have an instruction reading its length. It will say:

        MOVQ x+32(FP), AX

and the type of *that* argument is int (not slice), because
it is the length being read. This confuses the picture seen
by debuggers and now, worse, by the garbage collector.

Instead of attaching the type information to all uses,
emit an explicit list of TYPE instructions with the information.
The TYPE instructions are no-ops whose only role is to
provide an address to attach type information to.

For example, this function:

        func f(x, y, z int) (a, b string) {
                return
        }

now compiles into:

        --- prog list "f" ---
        0000 (/Users/rsc/x.go:3) TEXT    f+0(SB),$0-56
        0001 (/Users/rsc/x.go:3) LOCALS  ,
        0002 (/Users/rsc/x.go:3) TYPE    x+0(FP){int},$8
        0003 (/Users/rsc/x.go:3) TYPE    y+8(FP){int},$8
        0004 (/Users/rsc/x.go:3) TYPE    z+16(FP){int},$8
        0005 (/Users/rsc/x.go:3) TYPE    a+24(FP){string},$16
        0006 (/Users/rsc/x.go:3) TYPE    b+40(FP){string},$16
        0007 (/Users/rsc/x.go:3) MOVQ    $0,b+40(FP)
        0008 (/Users/rsc/x.go:3) MOVQ    $0,b+48(FP)
        0009 (/Users/rsc/x.go:3) MOVQ    $0,a+24(FP)
        0010 (/Users/rsc/x.go:3) MOVQ    $0,a+32(FP)
        0011 (/Users/rsc/x.go:4) RET     ,

The { } show the formerly hidden type information.
The { } syntax is used when printing from within the gc compiler.
It is not accepted by the assemblers.

The same type information is now included on global variables:

0055 (/Users/rsc/x.go:15) GLOBL   slice+0(SB){[]string},$24(AL*0)

This more accurate type information fixes a bug in the
garbage collector's precise heap collection.

The linker only cares about globals right now, but having the
local information should make things a little nicer for Carl
in the future.

Fixes #4907.

R=ken2
CC=golang-dev
https://golang.org/cl/7395056
2013-02-25 12:13:47 -05:00
Russ Cox
9f647288ef cmd/gc: avoid runtime code generation for closures
Change ARM context register to R7, to get out of the way
of the register allocator during the compilation of the
prologue statements (it wants to use R0 as a temporary).

Step 2 of http://golang.org/s/go11func.

R=ken2
CC=golang-dev
https://golang.org/cl/7369048
2013-02-22 14:25:50 -05:00
Russ Cox
6066fdcf38 cmd/6g, cmd/8g: switch to DX for indirect call block
runtime: add context argument to gogocall

Too many other things use AX, and at least one
(stack zeroing) cannot be moved onto a different
register. Use the less special DX instead.

Preparation for step 2 of http://golang.org/s/go11func.
Nothing interesting here, just split out so that we can
see it's correct before moving on.

R=ken2
CC=golang-dev
https://golang.org/cl/7395050
2013-02-22 10:47:54 -05:00
Russ Cox
1903ad7189 cmd/gc, reflect, runtime: switch to indirect func value representation
Step 1 of http://golang.org/s/go11func.

R=golang-dev, r, daniel.morsing, remyoudompheng
CC=golang-dev
https://golang.org/cl/7393045
2013-02-21 17:01:13 -05:00
Russ Cox
ac1015e7f3 cmd/8g: fix sse2 compare code gen
Fixes #4785.

R=ken2
CC=golang-dev
https://golang.org/cl/7300109
2013-02-14 14:49:04 -05:00
Russ Cox
7594440ef1 cmd/8g: add a few missing splitclean
Fixes #887.

R=ken2
CC=golang-dev
https://golang.org/cl/7303061
2013-02-07 17:55:25 -05:00
Rémy Oudompheng
9afb34b42e cmd/dist, cmd/8g: implement GO386=387/sse to choose FPU flavour.
A new environment variable GO386 is introduced to choose between
code generation targeting 387 or SSE2. No auto-detection is
performed and the setting defaults to 387 to preserve previous
behaviour.

The patch is a reorganization of CL 6549052 by rsc.

Fixes #3912.

R=minux.ma, rsc
CC=golang-dev
https://golang.org/cl/6962043
2013-01-02 22:55:23 +01:00
Rémy Oudompheng
4d3cbfdefa cmd/8g: introduce temporaries in byte multiplication.
Also restore the smallintconst case for binary ops.

Fixes #3835.

R=daniel.morsing, rsc
CC=golang-dev
https://golang.org/cl/6999043
2012-12-21 23:46:16 +01:00
Rémy Oudompheng
a617d06252 cmd/6g, cmd/8g: simplify integer division code.
Change suggested by iant. The compiler generates
special code for a/b when a is -0x80...0 and b = -1.
A single instruction can cover the case where b is -1,
so only one comparison is needed.
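
The corner case in question is observable from Go; the spec defines the
result by two's-complement wraparound rather than a hardware fault, which
is why the generated code must special-case a divisor of -1. A minimal
example:

        package main

        import "fmt"

        func main() {
                var a int32 = -1 << 31 // most negative int32
                var b int32 = -1
                fmt.Println(a / b) // prints -2147483648: wraps, does not trap
        }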

Fixes #3551.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6922049
2012-12-12 08:35:08 +01:00
Rémy Oudompheng
4cc9de9147 cmd/gc: add division rewrite to walk pass.
This allows 5g and 8g to benefit from the rewrite as shifts
or magic multiplies. The 64-bit arithmetic is not handled there
and is left in 6g.
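
For reference, a sketch of what such rewrites compute (div3 is an
invented helper; the magic constant is the standard multiplier for
unsigned 32-bit division by 3, and the compiler derives the constant and
shift per divisor):

        // x/8 for unsigned x becomes a shift: x >> 3.
        // x/3 becomes a multiply by m = ceil(2^33/3) followed by a shift.
        func div3(x uint32) uint32 {
                const m = 0xAAAAAAAB // ceil(2^33 / 3)
                return uint32((uint64(x) * m) >> 33)
        }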

Update #2230.

R=golang-dev, dave, mtj, iant, rsc
CC=golang-dev
https://golang.org/cl/6819123
2012-11-26 23:45:22 +01:00
Rémy Oudompheng
022b361ae2 cmd/5g, cmd/6g, cmd/8g: remove width check for componentgen.
The move to 64-bit ints in 6g made componentgen ineffective.
In componentgen, the code already selects which values it can handle.

On amd64:
benchmark                 old ns/op    new ns/op    delta
BenchmarkBinaryTree17    9477970000   9582314000   +1.10%
BenchmarkFannkuch11      5928750000   5255080000  -11.36%
BenchmarkGobDecode         37103040     31451120  -15.23%
BenchmarkGobEncode         16042490     16844730   +5.00%
BenchmarkGzip             811337400    741373600   -8.62%
BenchmarkGunzip           197928700    192844500   -2.57%
BenchmarkJSONEncode       224164100    140064200  -37.52%
BenchmarkJSONDecode       258346800    231829000  -10.26%
BenchmarkMandelbrot200      7561780      7601615   +0.53%
BenchmarkParse             12970340     11624360  -10.38%
BenchmarkRevcomp         1969917000   1699137000  -13.75%
BenchmarkTemplate         296182000    263117400  -11.16%

R=nigeltao, dave, daniel.morsing
CC=golang-dev
https://golang.org/cl/6821052
2012-11-01 14:36:08 +01:00
Rémy Oudompheng
6feb61325a cmd/6g, cmd/8g: fix two "out of fixed registers" cases.
In two cases, registers were allocated too early, resulting
in exhaustion of available registers when these operations
were nested.

The case of method calls was due to missing cases in igen,
which only makes calls but doesn't allocate a register for
the result.

The case of 8-bit multiplication was due to a wrong order
in register allocation when Ullman numbers were bigger on the
RHS.

Fixes #3907.
Fixes #4156.

R=rsc
CC=golang-dev, remy
https://golang.org/cl/6560054
2012-09-26 21:17:11 +02:00
Rémy Oudompheng
b45b6fd1c7 cmd/6g, cmd/8g: do not LEA[LQ] interfaces when calling methods.
It is enough to load the data word and the itab word directly
from memory, so we save a LEA instruction for each method call
and allow elimination of some extra temporaries.
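
For context, a sketch of the two-word interface representation being
loaded (field names here are illustrative; the authoritative layout is
in the runtime):

        import "unsafe"

        // A non-empty interface value occupies two machine words:
        // the itab word (type and method table) and the data word.
        type iface struct {
                tab  unsafe.Pointer // itab word
                data unsafe.Pointer // data word: the value, or a pointer to it
        }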

Update #1914.

R=daniel.morsing, rsc
CC=golang-dev, remy
https://golang.org/cl/6501110
2012-09-11 08:45:23 +02:00
Rémy Oudompheng
ae0862c1ec cmd/8g: import componentgen from 6g.
This makes the compilers' code more similar and improves
code generation a lot.

The number of LEAL instructions generated for cmd/go drops
by about 47%.

% GOARCH=386 go build -gcflags -S -a cmd/go | grep LEAL | wc -l
Before:       89774
After:        47548

benchmark                              old ns/op    new ns/op    delta
BenchmarkAppendFloatDecimal                  540          444  -17.78%
BenchmarkAppendFloat                        1160         1035  -10.78%
BenchmarkAppendFloatExp                     1060          922  -13.02%
BenchmarkAppendFloatNegExp                  1053          920  -12.63%
BenchmarkAppendFloatBig                     1773         1558  -12.13%
BenchmarkFormatInt                         13065        12481   -4.47%
BenchmarkAppendInt                         10981         9900   -9.84%
BenchmarkFormatUint                         3804         3650   -4.05%
BenchmarkAppendUint                         3506         3303   -5.79%
BenchmarkUnquoteEasy                         714          683   -4.34%
BenchmarkUnquoteHard                        5117         2915  -43.03%

Update #1914.

R=nigeltao, rsc, golang-dev
CC=golang-dev, remy
https://golang.org/cl/6489067
2012-09-09 20:30:08 +02:00
Luuk van Dijk
40af78c19e cmd/gc: inline slice[arr,str] in the frontend (mostly).
R=rsc, ality, rogpeppe, minux.ma, dave
CC=golang-dev
https://golang.org/cl/5966075
2012-06-02 22:50:57 -04:00
Russ Cox
001b75c942 cmd/gc: contiguous loop layout
Drop expecttaken function in favor of extra argument
to gbranch and bgen. Mark loop condition as likely to
be true, so that loops are generated inline.

The main benefit here is contiguous code when trying
to read the generated assembly. It has only minor effects
on the timing, and they mostly cancel the minor effects
that aligning function entry points had.  One exception:
both changes made Fannkuch faster.

Compared to before CL 6244066 (before aligned functions)
benchmark                 old ns/op    new ns/op    delta
BenchmarkBinaryTree17    4222117400   4201958800   -0.48%
BenchmarkFannkuch11      3462631800   3215908600   -7.13%
BenchmarkGobDecode         20887622     20899164   +0.06%
BenchmarkGobEncode          9548772      9439083   -1.15%
BenchmarkGzip                151687       152060   +0.25%
BenchmarkGunzip                8742         8711   -0.35%
BenchmarkJSONEncode        62730560     62686700   -0.07%
BenchmarkJSONDecode       252569180    252368960   -0.08%
BenchmarkMandelbrot200      5267599      5252531   -0.29%
BenchmarkRevcomp25M       980813500    985248400   +0.45%
BenchmarkTemplate         361259100    357414680   -1.06%

Compared to tip (aligned functions):
benchmark                 old ns/op    new ns/op    delta
BenchmarkBinaryTree17    4140739800   4201958800   +1.48%
BenchmarkFannkuch11      3259914400   3215908600   -1.35%
BenchmarkGobDecode         20620222     20899164   +1.35%
BenchmarkGobEncode          9384886      9439083   +0.58%
BenchmarkGzip                150333       152060   +1.15%
BenchmarkGunzip                8741         8711   -0.34%
BenchmarkJSONEncode        65210990     62686700   -3.87%
BenchmarkJSONDecode       249394860    252368960   +1.19%
BenchmarkMandelbrot200      5273394      5252531   -0.40%
BenchmarkRevcomp25M       996013800    985248400   -1.08%
BenchmarkTemplate         360620840    357414680   -0.89%

R=ken2
CC=golang-dev
https://golang.org/cl/6245069
2012-05-30 18:07:39 -04:00
Russ Cox
fefae6eed1 cmd/6g, cmd/8g: move panicindex calls out of line
The old code generated for a bounds check was
                CMP
                JLT ok
                CALL panicindex
        ok:
                ...

The new code is (once the linker finishes with it):
                CMP
                JGE panic
                ...
        panic:
                CALL panicindex

which moves the calls out of line, putting more useful
code in each cache line.  This matters especially in tight
loops, such as in Fannkuch.  The benefit is more modest
elsewhere, but real.

From test/bench/go1, amd64:

benchmark                old ns/op    new ns/op    delta
BenchmarkBinaryTree17   6096092000   6088808000   -0.12%
BenchmarkFannkuch11     6151404000   4020463000  -34.64%
BenchmarkGobDecode        28990050     28894630   -0.33%
BenchmarkGobEncode        12406310     12136730   -2.17%
BenchmarkGzip               179923       179903   -0.01%
BenchmarkGunzip              11219        11130   -0.79%
BenchmarkJSONEncode       86429350     86515900   +0.10%
BenchmarkJSONDecode      334593800    315728400   -5.64%
BenchmarkRevcomp25M     1219763000   1180767000   -3.20%
BenchmarkTemplate        492947600    483646800   -1.89%

And 386:

benchmark                old ns/op    new ns/op    delta
BenchmarkBinaryTree17   6354902000   6243000000   -1.76%
BenchmarkFannkuch11     8043769000   7326965000   -8.91%
BenchmarkGobDecode        19010800     18941230   -0.37%
BenchmarkGobEncode        14077500     13792460   -2.02%
BenchmarkGzip               194087       193619   -0.24%
BenchmarkGunzip              12495        12457   -0.30%
BenchmarkJSONEncode      125636400    125451400   -0.15%
BenchmarkJSONDecode      696648600    685032800   -1.67%
BenchmarkRevcomp25M     2058088000   2052545000   -0.27%
BenchmarkTemplate        602140000    589876800   -2.04%

To implement this, two new instruction forms:

        JLT target      // same as always
        JLT $0, target  // branch expected not taken
        JLT $1, target  // branch expected taken

The linker could also emit the prediction prefixes, but it
does not: expected taken branches are reversed so that the
expected case is not taken (as in the example above), and
the default expectation for such a jump is already "not taken".

R=golang-dev, gri, r, dave
CC=golang-dev
https://golang.org/cl/6248049
2012-05-29 12:09:27 -04:00
Russ Cox
c6ce44822c cmd/gc: faster code, mainly for rotate
* Eliminate bounds check on known small shifts.
* Rewrite x<<s | x>>(32-s) as a rotate (constant s).
* More aggressive (but still minimal) range analysis.
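
The rotate pattern referred to, as it appears in Go source (rotl7 is an
invented example; with a constant shift count the compiler can emit a
single rotate instruction instead of two shifts and an OR):

        func rotl7(x uint32) uint32 {
                return x<<7 | x>>(32-7) // recognized and compiled as a rotate
        }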

R=ken, dave, iant
CC=golang-dev
https://golang.org/cl/6209077
2012-05-24 17:20:07 -04:00
Rémy Oudompheng
f2ad374ae6 cmd/gc: don't believe that variables mentioned 256 times are unused.
Such variables would be put at 0(SP), leading to serious
corruption during zero initialization.
Fixes #3084.

R=golang-dev, r
CC=golang-dev, remy
https://golang.org/cl/5683052
2012-02-21 16:38:01 +11:00
Russ Cox
4fb3c4f765 gc: fix div bug
R=ken2
CC=golang-dev
https://golang.org/cl/4950052
2011-08-30 08:47:28 -04:00