For consistency with other code, as that was the only use of
memcopy outside of alg.goc.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/122030044
The gccgo version of USED only accepts a single variable, so
this simplifies merging.
LGTM=minux, dave
R=golang-codereviews, minux, dave
CC=golang-codereviews
https://golang.org/cl/115630043
Popular tools both add incorrect trailing zeros to zip
extras and accept such trailing zeros. We seemed to
be the only ones being strict here. Stop being strict. :(
Fixes #8186
LGTM=ruiu, adg, dave
R=adg, ruiu, dave
CC=frohrweck, golang-codereviews
https://golang.org/cl/117550044
This CL removes the sockaddrToAddr functions from socket creation
operations to avoid bugs like issue 7183.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/105100046
Avoid some pressure on the global mutex by lifting the call to userType
out of the closure.
TOTH to Matt Harden.
LGTM=crawshaw, ruiu
R=golang-codereviews, crawshaw, ruiu
CC=golang-codereviews
https://golang.org/cl/117520043
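The pattern generalizes: a lookup guarded by a global mutex should run once, outside the closure, rather than on every call. A minimal sketch with hypothetical names (not the actual gob internals):

package main

import (
	"fmt"
	"sync"
)

var (
	mu    sync.Mutex
	cache = map[string]int{}
)

// lookup takes the global mutex, like gob's userType lookup.
func lookup(key string) int {
	mu.Lock()
	defer mu.Unlock()
	return cache[key]
}

// makeOp lifts the lookup out of the closure, so the mutex is
// taken once here rather than on every invocation.
func makeOp(key string) func() int {
	v := lookup(key)
	return func() int { return v }
}

func main() {
	op := makeOp("x")
	fmt.Println(op(), op()) // no lock taken on these calls
}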
A good cleanup anyway, and it makes some room for an additional
field needed for issue 8412.
Update #8412
LGTM=iant
R=iant, khr
CC=golang-codereviews
https://golang.org/cl/112700043
6a and 8a rearrange memmove such that the fallthrough from move_1or2 to move_0 ends up being a JMP to a RET. Insert an explicit RET to prevent such silliness.
Do the same for memclr as prophylaxis.
benchmark old ns/op new ns/op delta
BenchmarkMemmove1 4.59 4.13 -10.02%
BenchmarkMemmove2 4.58 4.13 -9.83%
LGTM=khr
R=golang-codereviews, dvyukov, minux, ruiu, bradfitz, khr
CC=golang-codereviews
https://golang.org/cl/120930043
Create proper closures so hash functions can be called
directly from Go. Rearrange calling convention so return
value is directly accessible.
LGTM=dvyukov
R=golang-codereviews, dvyukov, dave, khr
CC=golang-codereviews
https://golang.org/cl/119360043
Several reasons:
1. Significantly simplifies runtime.
2. This code proved to be buggy.
3. Free is incompatible with bump-the-pointer allocation.
4. We want to write the runtime in Go, and Go does not have free.
5. Too much code to free env strings on startup.
LGTM=khr
R=golang-codereviews, josharian, tracey.brendan, khr
CC=bradfitz, golang-codereviews, r, rlh, rsc
https://golang.org/cl/116390043
Stand-alone this test is fine. Run together with
others, however, the stack used can actually go
negative because other tests are freeing stack
during its execution.
This behavior is new with the new stack allocator.
The old allocator never returned (min-sized) stacks.
This test is fairly poor - it needs to run in
isolation to be accurate. Maybe we should delete it.
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/119330044
The DISPATCH and CALLFN macro definitions depend on an inconsistency
between the internal cpp mini-implementation and the language proper in
whether center-dot is an identifier character. The macro depends on it not
being an identifier character, but the resulting code depends on it being one.
Remove the dependence on the inconsistency by placing the center-dot into
the macro invocation rather that the body.
No semantic change. This is just renaming macro arguments.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/119320043
This change introduces gomallocgc, a Go clone of mallocgc.
Only a few uses have been moved over, so there are still
lots of uses from C. Many of these C uses will be moved
over to Go (e.g. in slice.goc), but probably not all.
What should remain of C's mallocgc is an open question.
LGTM=rsc, dvyukov
R=rsc, khr, dave, bradfitz, dvyukov
CC=golang-codereviews
https://golang.org/cl/108840046
If a user sets his/her own boundary string with SetBoundary,
we don't need to call randomBoundary at all.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/95760043
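A minimal sketch of the lazy pattern described, using hypothetical field and method names rather than the real multipart internals:

package main

import "fmt"

type writer struct {
	boundary string // empty until set or first needed
}

func (w *writer) SetBoundary(b string) { w.boundary = b }

// Boundary generates a random boundary only if none was set.
func (w *writer) Boundary() string {
	if w.boundary == "" {
		w.boundary = randomBoundary()
	}
	return w.boundary
}

func randomBoundary() string { return "deadbeef" } // stand-in

func main() {
	w := &writer{}
	w.SetBoundary("my-boundary")
	fmt.Println(w.Boundary()) // randomBoundary is never called
}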
Preparing for the syscall package freeze.
««« original CL description
syscall: consolidate, simplify socket options for Unix-like systems
Also exposes common socket option functions on Solaris.
Update #7174
Update #7175
LGTM=aram
R=golang-codereviews, aram
CC=golang-codereviews
https://golang.org/cl/107280044
»»»
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/121880043
Preparing for the syscall package freeze.
««« original CL description
syscall: regenerate z-files for darwin
Updates z-files from 10.7 kernel-based to 10.9 kernel-based.
LGTM=iant
R=golang-codereviews, bradfitz, iant
CC=golang-codereviews
https://golang.org/cl/102610045
»»»
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/114530044
This is meant to share my progress on Issue 8275, if it's useful to you. I'm not familiar with the formatter's internals, so this change is likely naive.
Change these calls to measure width in runes not bytes:
fmt.Printf("(%5q)\n", '§')
fmt.Printf("(%3c)\n", '§')
Fixes #8275.
LGTM=r
R=golang-codereviews, r
CC=golang-codereviews
https://golang.org/cl/104320043
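A small illustration of the fixed behaviour: after this change the padding for %c counts runes, so a multi-byte rune lines up with an ASCII one:

package main

import "fmt"

func main() {
	fmt.Printf("(%3c)\n", '§') // prints (  §): width counted in runes
	fmt.Printf("(%3c)\n", 'a') // prints (  a)
}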
On Linux, adding a socket descriptor to epoll instance before getting
the EINPROGRESS return value from connect system call could be a root
cause of spurious on-connect events.
See golang.org/issue/8276, golang.org/issue/8426 for further information.
All credit to Jason Eggleston <jason@eggnet.com>
Fixes #8276.
Fixes #8426.
LGTM=dvyukov
R=dvyukov, golang-codereviews, adg, dave, iant, alex.brainman
CC=golang-codereviews
https://golang.org/cl/120820043
Implement the design described in:
https://docs.google.com/document/d/1v4Oqa0WwHunqlb8C3ObL_uNQw3DfSY-ztoA-4wWbKcg/pub
Summary of the changes:
GC uses "2-bits per word" pointer type info embed directly into bitmap.
Scanning of stacks/data/heap is unified.
The old spans types go away.
Compiler generates "sparse" 4-bits type info for GC (directly for GC bitmap).
Linker generates "dense" 2-bits type info for data/bss (the same as stacks use).
Summary of results:
-1680 lines of code total (-1000+ in mgc0.c only)
-25% memory consumption
-3-7% binary size
-15% GC pause reduction
-7% run time reduction
LGTM=khr
R=golang-codereviews, rsc, christoph, khr
CC=golang-codereviews, rlh
https://golang.org/cl/106260045
Because the reference time is the reference time but beginners seem
to think otherwise, make it clearer you can't choose the reference time.
LGTM=josharian, dave
R=golang-codereviews, josharian, dave
CC=golang-codereviews
https://golang.org/cl/117250044
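For illustration: a layout is always written in terms of the fixed reference time Mon Jan 2 15:04:05 MST 2006; you choose the layout, never the reference.

package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(2014, time.August, 1, 12, 30, 0, 0, time.UTC)
	fmt.Println(t.Format("2006-01-02 15:04")) // 2014-08-01 12:30
}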
This change causes a TLS client and server to verify that received
elliptic curve points are on the expected curve. This isn't actually
necessary in the Go TLS stack, but Watson Ladd has convinced me that
it's worthwhile because it's pretty cheap and it removes the
possibility that some change in the future (e.g. tls-unique) will
depend on it without the author checking that precondition.
LGTM=bradfitz
R=bradfitz
CC=golang-codereviews
https://golang.org/cl/115290046
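A sketch of the kind of validation described, using the standard library's curve interface (the received coordinates here are made up):

package main

import (
	"crypto/elliptic"
	"fmt"
	"math/big"
)

func main() {
	curve := elliptic.P256()
	// Pretend x, y arrived in the peer's key exchange message.
	x, y := big.NewInt(1), big.NewInt(2)
	if !curve.IsOnCurve(x, y) {
		fmt.Println("reject: point is not on the curve")
	}
}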
ASN.1 elements can be optional, and can have a default value.
Traditionally, Go has omitted elements that are optional and that have
the zero value. I believe that's a bug (see [1]).
This change causes an optional element with a default value to only be
omitted when it has that default value. The previous behaviour of
omitting optional, zero elements with no default is retained because
it's used quite a lot and will break things if changed.
[1] https://groups.google.com/d/msg/Golang-nuts/9Ss6o9CW-Yo/KL_V7hFlyOAJ
Fixes #7780.
R=bradfitz
LGTM=bradfitz
R=golang-codereviews, bradfitz, rsc
CC=golang-codereviews, r
https://golang.org/cl/86960045
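For illustration, encoding/asn1 expresses this with struct tags; a sketch (the Version field is hypothetical):

package main

import (
	"encoding/asn1"
	"fmt"
)

type message struct {
	Version int `asn1:"optional,default:1"`
}

func main() {
	a, _ := asn1.Marshal(message{Version: 1}) // equals the default: omitted
	b, _ := asn1.Marshal(message{Version: 2}) // differs: encoded
	fmt.Println(len(a) < len(b))              // true
}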
While we're here, make it lookup the tlsfallback symbol only once.
LGTM=crawshaw
R=golang-codereviews, crawshaw, dave
CC=golang-codereviews
https://golang.org/cl/107430044
selv is created with temp() which calls tempname, which marks
the new n with EscNever, so there is no need to explicitly set
EscNone on the select descriptor.
Fixes #8396.
LGTM=dvyukov
R=golang-codereviews, dave, dvyukov
CC=golang-codereviews
https://golang.org/cl/112520043
Sweepone may be running while a new span is allocating. It
must not see the state updated while the sweepgen is unset.
Fixes #8399
LGTM=dvyukov
R=golang-codereviews, dvyukov
CC=golang-codereviews
https://golang.org/cl/118050043
This is bad for 2 reasons:
1. if the code under lock ever grows stack,
it will deadlock as stack growing acquires mheap lock.
2. It currently deadlocks with SetCPUProfileRate:
scavenger locks mheap, receives prof signal and tries to lock prof lock;
meanwhile SetCPUProfileRate locks prof lock and tries to grow stack
(presumably in runtime.unlock->futexwakeup). Boom.
Let's assume that it
Fixes #8407.
LGTM=rsc
R=golang-codereviews, rsc
CC=golang-codereviews, khr
https://golang.org/cl/112640043
With cl/112640043 TestCgoDeadlockCrash episodically prints:
unexpected return pc for runtime.newstackcall
After adding debug output I see the following trace:
runtime: unexpected return pc for runtime.newstackcall called from 0xc208011b00
runtime.throw(0x414da86)
src/pkg/runtime/panic.c:523 +0x77
runtime.gentraceback(0x40165fc, 0xba440c28, 0x0, 0xc208d15200, 0xc200000000, 0xc208ddfd20, 0x20, 0x0, 0x0, 0x300)
src/pkg/runtime/traceback_x86.c:185 +0xca4
runtime.callers(0x1, 0xc208ddfd20, 0x20)
src/pkg/runtime/traceback_x86.c:438 +0x98
mcommoninit(0xc208ddfc00)
src/pkg/runtime/proc.c:369 +0x5c
runtime.allocm(0xc208052000)
src/pkg/runtime/proc.c:686 +0xa6
newm(0x4017850, 0xc208052000)
src/pkg/runtime/proc.c:933 +0x27
startm(0xc208052000, 0x100000001)
src/pkg/runtime/proc.c:1011 +0xba
wakep()
src/pkg/runtime/proc.c:1071 +0x57
resetspinning()
src/pkg/runtime/proc.c:1297 +0xa1
schedule()
src/pkg/runtime/proc.c:1366 +0x14b
runtime.gosched0(0xc20808e240)
src/pkg/runtime/proc.c:1465 +0x5b
runtime.newstack()
src/pkg/runtime/stack.c:891 +0x44d
runtime: unexpected return pc for runtime.newstackcall called from 0xc208011b00
runtime.newstackcall(0x4000cbd, 0x4000b80)
src/pkg/runtime/asm_amd64.s:278 +0x6f
I suspect that it can happen on any stack split.
So don't unwind g0 stack.
Also, that comment is lying -- we can traceback w/o mcache,
CPU profiler does that.
LGTM=rsc
R=golang-codereviews
CC=golang-codereviews, khr, rsc
https://golang.org/cl/120040043
There are fields in the Addr that do not matter for the
purpose of deciding that the same word is already
in the current literal pool. Copy only the fields that
do matter.
This came up when comparing against the Go version
because the way it is invoked doesn't copy a few fields
(like node) that are never directly used by liblink itself.
Also remove a stray print that is not well-defined in
the new liblink. (Cannot use %D outside of %P, because
%D needs the outer Prog*.)
LGTM=minux
R=minux
CC=golang-codereviews
https://golang.org/cl/119000043
This matches Go's fmt.Printf instead of ANSI C's dumb rules.
It makes the -S output from C liblink match Go's liblink.
LGTM=minux
R=minux
CC=golang-codereviews
https://golang.org/cl/112600043
They do not, but pretend that they do.
The immediate need is that it breaks the new GC because
these are weird symbols, as if with pointers, but not necessarily
pointer aligned.
LGTM=rsc
R=golang-codereviews, dave, josharian, khr, rsc
CC=golang-codereviews, iant, khr, rlh
https://golang.org/cl/116060043
I've found this very useful for generating
good test case lists for -short mode for
the disassemblers.
Fixes #7959.
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/98150043
Currently they are scanned conservatively.
But there is no reason to scan them. C world must not contain
pointers into Go heap. Moreover, we don't have enough information
to emit write barriers nor update pointers there in future.
The immediate need is that it breaks the new GC because
these are weird symbols, as if with pointers, but not necessarily
pointer aligned.
LGTM=rsc
R=golang-codereviews, rlh, rsc
CC=golang-codereviews, iant, khr
https://golang.org/cl/117000043
In the runtime, we want to control where allocations happen.
In particular, we don't want the code implementing malloc to
itself trigger a malloc. This change prevents the compiler
from inserting mallocs on our behalf (due to escaping declarations).
This check does not trigger on the current runtime code.
Note: Composite literals are still allowed.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/105280047
So we can tell from a binary which version of
Go built it.
LGTM=minux, rsc
R=golang-codereviews, minux, khr, rsc, dave
CC=golang-codereviews
https://golang.org/cl/117040043
This is more useful than panicking, since otherwise every caller needs
to do the length check before calling; some will forget, and have a
potential submarine crasher as a result. Other implementations of this
functionality do a length check.
This is backward compatible, except if someone has written code that
relies on this panicking with different length args. However, that was
not the case before Go 1.3 either.
Updates #7304.
LGTM=agl
R=agl, minux, hanwen
CC=golang-codereviews
https://golang.org/cl/118750043
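Assuming this entry refers to crypto/subtle.ConstantTimeCompare, the new behaviour looks like:

package main

import (
	"crypto/subtle"
	"fmt"
)

func main() {
	// Different lengths: returns 0 (not equal) instead of crashing.
	fmt.Println(subtle.ConstantTimeCompare([]byte("abc"), []byte("abcd"))) // 0
	fmt.Println(subtle.ConstantTimeCompare([]byte("abc"), []byte("abc")))  // 1
}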
In both cases we lie to malloc about the actual size that we need.
In panic we ask for less memory than we are going to use.
In slice we ask for more memory than we are going to use
(potentially asking for a fractional number of elements).
This breaks the new GC.
LGTM=khr
R=golang-codereviews, dave, khr
CC=golang-codereviews, rsc
https://golang.org/cl/116940043
Rewrite gotos that violate Go's stricter rules.
Use uchar* instead of char* in a few places that aren't strings.
Remove dead opcross code from asm5.c.
Declare regstr (in both list6 and list8) static.
LGTM=minux, dave
R=minux, dave
CC=golang-codereviews
https://golang.org/cl/113230043
Even though pointers are 4 bytes the stack frame should be kept
a multiple of 8 bytes so that return addresses pushed on the stack
are properly aligned.
Fixes #8379.
LGTM=dvyukov, minux
R=minux, bradfitz, dvyukov, dave
CC=golang-codereviews
https://golang.org/cl/115840048
We might want to add a go/build.IsInternal(pkg string) bool
later, but this works for now.
LGTM=dave, rsc
R=rsc, dave
CC=golang-codereviews
https://golang.org/cl/113300044
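A sketch of what such a helper might look like; go/build did not gain an IsInternal at the time, so the function below is hypothetical:

package main

import (
	"fmt"
	"strings"
)

// isInternal reports whether the import path is inside an internal/ subtree.
func isInternal(path string) bool {
	return path == "internal" ||
		strings.HasPrefix(path, "internal/") ||
		strings.HasSuffix(path, "/internal") ||
		strings.Contains(path, "/internal/")
}

func main() {
	fmt.Println(isInternal("net/internal/socktest")) // true
	fmt.Println(isInternal("net/http"))              // false
}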
These correspond to 2 and 3 word fat copies/clears on 8g, which dominate usage in the stdlib. (70% of copies and 46% of clears are for 2 or 3 words.) I missed these in CL 111350043, which added 2 and 3 word benchmarks for 6g. A follow-up CL will optimize these cases.
LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/115160043
benchmark old ns/op new ns/op delta
BenchmarkSelectUncontended 220 165 -25.00%
BenchmarkSelectContended 209 161 -22.97%
BenchmarkSelectProdCons 1042 904 -13.24%
But more importantly this change will allow
to get rid of free function in runtime.
Fixes #6494.
LGTM=rsc, khr
R=golang-codereviews, rsc, dominik.honnef, khr
CC=golang-codereviews, remyoudompheng
https://golang.org/cl/107670043
This is a temporary change to see how far the
builder gets when it times out.
LGTM=aram, 0intro
R=0intro, aram
CC=golang-codereviews, mischief
https://golang.org/cl/111400043
Fix virtual address of the start of the text segment
on amd64 Plan 9.
This issue was partially fixed in cmd/addr2line,
as part of CL 106460044, but we forgot to port the
change to cmd/objdump.
In the meantime, we also fixed the textStart address
in both cmd/addr2line and cmd/objdump.
LGTM=aram, ality, mischief
R=rsc, mischief, aram, ality
CC=golang-codereviews, jas
https://golang.org/cl/117920043
The CopyFat benchmarks were changed in CL 92760044. See CL 111350043 for discussion.
LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/116000043
These benchmarks are important for performance. When compiling the stdlib:
* 77.1% of the calls to sgen (copyfat) are for 16 bytes; another 8.7% are for 24 bytes. (The next most common is 32 bytes, at 5.7%.)
* Over half the calls to clearfat are for 16 or 24 bytes.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews
https://golang.org/cl/111350043
Ken's standalone file server and its derivatives, like cwfs, return
error strings different from fossil when the user opens non-existent
files.
LGTM=aram, 0intro, r
R=0intro, aram, r
CC=golang-codereviews, ken
https://golang.org/cl/112420045
Also, fix a write check in writeBuf and make some bounds checks simpler.
LGTM=gri
R=golang-codereviews, adg, gri, r, minux
CC=golang-codereviews
https://golang.org/cl/113060043
updatememstats is called on both the m and g stacks.
Call into flushallmcaches correctly. flushallmcaches
can only run on the M stack.
This is somewhat temporary. Once ReadMemStats is in
Go, we can have all of this code M-only.
LGTM=dvyukov
R=golang-codereviews, dvyukov
CC=golang-codereviews
https://golang.org/cl/116880043
Encoder compilation must be enc-independent,
because the resulting program is reused across
different encoders.
LGTM=r
R=golang-codereviews, r
CC=golang-codereviews
https://golang.org/cl/115860043
Breaks build for FreeBSD. Probably clang related?
««« original CL description
cmd/cgo: disable inappropriate warnings when the gcc struct is empty
package main
//#cgo CFLAGS: -Wall
//void test() {}
import "C"
func main() {
C.test()
}
This code causes gcc to issue warnings about an unused variable.
This commit uses the offset of the second return value of
Packages.structType to detect whether the gcc struct is empty,
and if it is, invokes the C function directly instead of writing
unused code.
LGTM=dave, minux
R=golang-codereviews, iant, minux, dave
CC=golang-codereviews
https://golang.org/cl/109640045
»»»
TBR=dfc
R=dave
CC=golang-codereviews
https://golang.org/cl/114990044
package main
//#cgo CFLAGS: -Wall
//void test() {}
import "C"
func main() {
C.test()
}
This code causes gcc to issue warnings about an unused variable.
This commit uses the offset of the second return value of
Packages.structType to detect whether the gcc struct is empty,
and if it is, invokes the C function directly instead of writing
unused code.
LGTM=dave, minux
R=golang-codereviews, iant, minux, dave
CC=golang-codereviews
https://golang.org/cl/109640045
Redo stack allocation. This is mostly the same as
the original CL with a few bug fixes.
1. add racemalloc() for stack allocations
2. fix poolalloc/poolfree to terminate free lists correctly.
3. adjust span ref count correctly.
4. don't use cache for sizes >= StackCacheSize.
Should fix bugs and memory leaks in original changelist.
««« original CL description
undo CL 104200047 / 318b04f28372
Breaks windows and race detector.
TBR=rsc
««« original CL description
runtime: stack allocator, separate from mallocgc
In order to move malloc to Go, we need to have a
separate stack allocator. If we run out of stack
during malloc, malloc will not be available
to allocate a new stack.
Stacks are the last remaining FlagNoGC objects in the
GC heap. Once they are out, we can get rid of the
distinction between the allocated/blockboundary bits.
(This will be in a separate change.)
Fixes #7468
Fixes #7424
LGTM=rsc, dvyukov
R=golang-codereviews, dvyukov, khr, dave, rsc
CC=golang-codereviews
https://golang.org/cl/104200047
»»»
TBR=rsc
CC=golang-codereviews
https://golang.org/cl/101570044
»»»
LGTM=dvyukov
R=dvyukov, dave, khr, alex.brainman
CC=golang-codereviews
https://golang.org/cl/112240044
Resolves TODO for not walking all goroutines in NumGoroutines.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rsc
https://golang.org/cl/107290044
1. Add select on sync channels benchmark.
2. Make channels in BenchmarkSelectNonblock shared.
With GOMAXPROCS=1 it is the same, but with GOMAXPROCS>1
it becomes a more interesting benchmark.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews
https://golang.org/cl/115780043
Previously we had a bitmap to check whether or not a byte
appearing in a string should be replaced.
need a separate bitmap for that purpose. Removing the bitmap
makes the code simpler.
LGTM=dave, iant, nigeltao
R=golang-codereviews, dave, gobot, nigeltao, iant, bradfitz, rsc
CC=golang-codereviews
https://golang.org/cl/110100043
Fixes #5750.
https://code.google.com/p/go/issues/detail?id=5750
os: Separate windows from posix. Implement windows support.
path/filepath: Use the same implementation as other platforms
syscall: Add/rework new APIs for Windows
LGTM=alex.brainman
R=golang-codereviews, alex.brainman, gobot, rsc, minux
CC=golang-codereviews
https://golang.org/cl/86160044
"archive/tar: reuse temporary buffer in writeHeader" introduced a
change which was supposed to help lower the number of allocations from
512 bytes for every call to writeHeader. This change broke the writing
of PAX headers.
writeHeader calls writePAXHeader and writePAXHeader calls writeHeader
again. writeHeader will end up writing the PAX header twice.
example broken header:
PaxHeaders.4007/NetLock_Arany_=Class_Gold=_Főtanúsítvány.crt0000000000000000000000000000007112301216634021512 xustar0000000000000000
PaxHeaders.4007/NetLock_Arany_=Class_Gold=_Főtanúsítvány.crt0000000000000000000000000000007112301216634021512 xustar0000000000000000
example correct header:
PaxHeaders.4290/NetLock_Arany_=Class_Gold=_Főtanúsítvány.crt0000000000000000000000000000007112301216634021516 xustar0000000000000000
0100644000000000000000000000270412301216634007250 0ustar0000000000000000
This commit adds a dedicated buffer for pax headers to the Writer
struct. This change increases the size of the struct by 512 bytes, but
allows tar/writer to avoid allocating 512 bytes for all written
headers and it avoids allocating 512 more bytes for pax headers.
LGTM=dsymonds
R=dsymonds, dave, iant
CC=golang-codereviews
https://golang.org/cl/110480043
The garbage collector and stack scans are good enough now.
Fixes #7446.
LGTM=r
R=r, dvyukov
CC=golang-codereviews, mdempsky, mtj
https://golang.org/cl/112870046
As written, the ! applies before the &1.
This would crash writing out missing pcdata tables
if we ever used non-contiguous IDs in a function.
We don't, but fix anyway.
LGTM=iant, minux
R=minux, iant
CC=golang-codereviews
https://golang.org/cl/117810047
DWARF says only one is necessary.
The count is preferable because it admits 0-length arrays.
Update debug/dwarf to handle either form.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/111230044
They can be large, so use a varint encoding rather than only one byte.
LGTM=iant, rsc
R=rsc, iant
CC=golang-codereviews
https://golang.org/cl/113180043
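For illustration, the standard varint scheme (as in encoding/binary) spends one byte on small values and grows only as values do:

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, 300) // value too large for a single byte
	fmt.Println(n, buf[:n])          // 2 [172 2]
}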
The issue was discovered during testing of a change to the runtime.
Even if it is unlikely to happen, the comment can save an hour for the
next person who hits it.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rlh, rsc
https://golang.org/cl/116790043
I don't see how it can lead to bad things today.
But it's better to kill it before it does.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rsc
https://golang.org/cl/111130045
Make CanBackquote(invalid UTF-8) return false.
Also add two tests which show that CanBackquote reports
true for strings containing a BOM.
Fixes #7572.
LGTM=r
R=golang-codereviews, bradfitz, r
CC=golang-codereviews
https://golang.org/cl/111780045
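Usage after the change:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	fmt.Println(strconv.CanBackquote("plain text"))   // true
	fmt.Println(strconv.CanBackquote("bad\xffbytes")) // false: invalid UTF-8
}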
This CL adds 'dropg', which is called to drop the association
between m and its current goroutine, and it makes schedule
handle locked goroutines correctly, instead of requiring all
callers of schedule to do that.
The effect is that if you want to take over an m for, say,
garbage collection work while still allowing the current g
to run on some other m, you can do an mcall to a function
that is:
// dissociate gp
dropg();
gp->status = Gwaiting; // for ready
// put gp on run queue for others to find
runtime·ready(gp);
/* ... do other work here ... */
// done with m, let it run goroutines again
schedule();
Before this CL, the dropg() body had to be written explicitly,
and the check for lockedg before schedule had to be
written explicitly too, both of which make the code a bit
more fragile than it needs to be.
LGTM=iant
R=dvyukov, iant
CC=golang-codereviews, rlh
https://golang.org/cl/113110043
This variable allows users to select the compiler when using the
gccgo toolchain.
LGTM=rsc
R=rsc, iant, minux, aram
CC=axwalk, golang-codereviews
https://golang.org/cl/106700044
warning: /usr/go/src/liblink/asm5.c:720 set and not used: m
warning: /usr/go/src/liblink/asm5.c:807 set and not used: c
LGTM=minux
R=minux
CC=golang-codereviews
https://golang.org/cl/108570043
The debug/dwarf package cannot parse the format generated here,
but the format can be changed so it does.
After this edit, tweaking the expression defining the offset
of a struct field, the dwarf package can parse the tables (again?).
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/105710043
Fixes #6293.
Image "testdata/benchRGB-interlace.png" was generated by opening "testdata/benchRGB.png" in the editor Gimp and saving it with interlacing enabled.
Benchmark:
BenchmarkDecodeRGB 500 7014194 ns/op 37.37 MB/s
ok pkg/image/png 4.657s
BenchmarkDecodeInterlacing 100 10623241 ns/op 24.68 MB/s
ok pkg/image/png 1.339s
LGTM=nigeltao
R=nigeltao, andybons, matrixik
CC=golang-codereviews
https://golang.org/cl/102130044
(This is a patch from the pkgsrc Go package.)
LGTM=iant
R=golang-codereviews, iant, joerg.sonnenberger, dave
CC=golang-codereviews, joerg
https://golang.org/cl/108340043
Added a complement to (*File).Symbols for the dynamic symbol table.
This would be useful, for instance, when searching for shared objects
compatible with certain libraries (for instance, LADSPA requires an
exported symbol "ladspa_descriptor").
Added a variable ErrNoSymbols that canonicalizes the return from
(*File).Symbols and (*File).DynamicSymbols if the file has no symbols.
Added tests for both (*File).Symbols and (*File).DynamicSymbols;
there was never a test for (*File).Symbols at all. A small C program using
libelf, included in the test data, was used to produce the golden
symbols to compare against.
As part of the requirements for testing, (*File).Symbols and (*File).DynamicSymbols now document the order in which the symbol tables are returned (in the order the symbols appear in the file).
All tests currently pass.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/107530043
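A usage sketch of the new API (the file path is just an example):

package main

import (
	"debug/elf"
	"fmt"
	"log"
)

func main() {
	f, err := elf.Open("/usr/lib/ladspa/amp.so") // example path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	syms, err := f.DynamicSymbols()
	if err != nil {
		log.Fatal(err) // elf.ErrNoSymbols if the file has none
	}
	for _, s := range syms {
		if s.Name == "ladspa_descriptor" {
			fmt.Println("looks like a LADSPA plugin")
		}
	}
}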
Also detect GOARCH automatically based on `uname -m`.
LGTM=crawshaw, dave, rsc
R=rsc, iant, crawshaw, dave
CC=golang-codereviews
https://golang.org/cl/111780043
Found using the vet check in CL 106370045.
This is a second attempt at CL 101670044, which omitted the deps_test change.
This adds dependencies to net/rpc:
encoding
encoding/base64
encoding/json
html
unicode/utf16
The obvious correctness and security warrants the additional dependencies.
LGTM=rsc
R=r, minux, rsc, adg
CC=golang-codereviews
https://golang.org/cl/110890043
When we switched to 8K pages,
the heap started to grow by 128K instead of 64K,
because the code was implicitly assuming that pages are 4K.
Fix that and make the code more robust.
LGTM=khr
R=golang-codereviews, dave, khr
CC=golang-codereviews, rsc
https://golang.org/cl/106450044
Also remove arch-specific Go files in the Plan 9 syscall package
LGTM=0intro
R=0intro, dave
CC=ality, golang-codereviews, jas, mischief, rsc
https://golang.org/cl/112720043
Use the header size instead of abusing the text symbol.
cmd/addr2line needs to know the virtual address of the start
of the text segment (load address plus header size). For
this, it used the text symbol added by the linker. This is
wrong on amd64. Header size is 40 bytes, not 32 like on 386
and arm. Function alignment is 16 bytes causing text to be
at 0x200030.
debug/plan9obj now exports both the load address and the
header size; cmd/addr2line uses this new information and
doesn't rely on text anymore.
LGTM=0intro
R=0intro, gobot, ality
CC=ality, golang-codereviews, jas, mischief
https://golang.org/cl/106460044
NetlinkRIB currently allocates a page-sized slice of bytes inside a
for loop, and it also calls Getpagesize() in the same loop.
This CL changes NetlinkRIB to preallocate the page-sized slice of
bytes before reaching the for loop. This reduces memory allocations
and lowers the number of calls to Getpagesize() to 1 per NetlinkRIB
call.
This CL reduces the allocated memory from 141.5 MB down to 52 MB in
a test.
LGTM=crawshaw, dave
R=dave, dsymonds, crawshaw
CC=bradfitz, dsymonds, golang-codereviews
https://golang.org/cl/110920043
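The shape of the fix as a sketch (not the actual netlink code): hoist the buffer allocation and the page-size query out of the loop, and copy out only the bytes that must survive an iteration.

package main

import "os"

// readAll drains a reader-like source using one reusable page-sized buffer.
func readAll(read func([]byte) (int, bool)) [][]byte {
	var msgs [][]byte
	rb := make([]byte, os.Getpagesize()) // allocated once, before the loop
	for {
		n, done := read(rb)
		msgs = append(msgs, append([]byte(nil), rb[:n]...)) // copy what we keep
		if done {
			return msgs
		}
	}
}

func main() {
	_ = readAll(func(b []byte) (int, bool) { return copy(b, "msg"), true })
}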
Fixes #8165.
After this change, the panic is replaced by a message:
$ go build -o out ...doesntexist
warning: "...doesntexist" matched no packages
no packages to build
The motivation for returning exit code 1 is to allow the -o flag
to be used to guarantee that the output binary is written to
when the exit status is 0. If someone uses an import path pattern
to specify a single package and suddenly that matches no packages,
it's better to return exit code 1 instead of silently doing nothing.
This is consistent with the case when the -o flag is given and multiple
packages are matched.
It's also somewhat consistent with the current behavior with the
panic, except that the panic gave return code 2. But it's similar in
that it's also non-zero (indicating failure).
I've changed the language to be similar to the output of go test
when an import path pattern matches no packages (which also has a
return status of 1):
$ go test ...doesntexist
warning: "...doesntexist" matched no packages
no packages to test
LGTM=adg
R=golang-codereviews, josharian, gobot, adg
CC=golang-codereviews
https://golang.org/cl/107140043
Maxstring is not updated in the new string routines,
which makes the runtime think that long strings are bogus.
Fixes #8339.
LGTM=crawshaw, iant
R=golang-codereviews, crawshaw, iant
CC=golang-codereviews, khr, rsc
https://golang.org/cl/110930043
Broke build; missing deps_test change. Will re-send the original with the appropriate fix.
««« original CL description
net/rpc: use html/template to render html
Found using the vet check in CL 106370045.
LGTM=r
R=golang-codereviews, r
CC=golang-codereviews
https://golang.org/cl/101670044
»»»
TBR=r
CC=golang-codereviews
https://golang.org/cl/110880044
The code generating the .debug_frame section emits pairs of "advance PC",
"set SP offset" pseudo-instructions. Before the fix, the PC advance comes
out before the SP setting, which means the emitted offset for a block is
actually the value at the end of the block, which is incorrect for the
block itself.
The easiest way to fix this problem is to emit the SP offset before the
PC advance.
One delicate point: the last instruction to come out is now an
"advance PC", which means that if there are padding intsructions after
the final RET, they will appear to have a non-zero offset. This is odd
but harmless because there is no legal way to have a PC in that range,
or to put it another way, if you get here the SP is certainly screwed up
so getting the wrong (virtual) frame pointer is the least of your worries.
LGTM=iant
R=rsc, iant, lvd
CC=golang-codereviews
https://golang.org/cl/112750043
Both stdout and stderr are sent to /dev/null in android
apps. Introducing fatalf allows android to implement its
own copy that sends fatal errors to __android_log_print.
LGTM=minux, dave
R=minux, dave
CC=golang-codereviews
https://golang.org/cl/108400045
Based on cl/69170045 by Elias Naur.
There are currently several schemes for acquiring a TLS
slot to save the g register. None of them appear to work
for android. The closest are linux and darwin.
Linux uses a linker TLS relocation. This is not supported
by the android linker.
Darwin uses a fixed offset, and calls pthread_key_create
until it gets the slot it wants. As the runtime loads
late in the android process lifecycle, after an
arbitrary number of other libraries, we cannot rely on
any particular slot being available.
So we call pthread_key_create, take the first slot we are
given, and put it in runtime.tlsg, which we turn into a
regular variable in cmd/ld.
Makes android/arm cgo binaries work.
LGTM=minux
R=elias.naur, minux, dave, josharian
CC=golang-codereviews
https://golang.org/cl/106380043
The code in GC that handles gp->gobuf.ctxt is wrong,
because it does not mark the ctxt object itself;
it just queues the ctxt object for scanning.
So the ctxt object can be collected as garbage.
However, Gobuf.ctxt is void*, so it's always marked and
scanned through G.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, khr, rsc
https://golang.org/cl/105490044
runtime·usleep and runtime·osyield fall back to calling an
assembly wrapper for the libc functions in the absence of an m,
so they can be called in cgo callback context.
LGTM=rsc
R=minux.ma, rsc
CC=dave, golang-codereviews
https://golang.org/cl/102620044
A temporary 512-byte buffer is allocated for every call to
readHeader. This buffer isn't returned to the caller and could
be reused to lower the number of memory allocations.
This CL improves it by using a pool and zeroing out the buffer before
putting it back into the pool.
benchmark old ns/op new ns/op delta
BenchmarkListFiles100k 545249903 538832687 -1.18%
benchmark old allocs new allocs delta
BenchmarkListFiles100k 2105167 2005692 -4.73%
benchmark old bytes new bytes delta
BenchmarkListFiles100k 105903472 54831527 -48.22%
This improvement is very important if your code has to deal with a lot
of tarballs which contain a lot of files.
LGTM=dsymonds
R=golang-codereviews, dave, dsymonds, bradfitz
CC=golang-codereviews
https://golang.org/cl/108240044
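A sketch of the pooling pattern described, including the zeroing before the buffer goes back into the pool:

package main

import "sync"

var blockPool = sync.Pool{
	New: func() interface{} { return make([]byte, 512) },
}

func getBlock() []byte { return blockPool.Get().([]byte) }

func putBlock(b []byte) {
	for i := range b { // zero it so the next user sees a clean block
		b[i] = 0
	}
	blockPool.Put(b)
}

func main() {
	b := getBlock()
	copy(b, "header bytes")
	putBlock(b)
}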
A temporary 512-byte buffer is allocated for every call to
writeHeader. This buffer could be reused to lower the number
of memory allocations.
benchmark old ns/op new ns/op delta
BenchmarkWriteFiles100k 634622051 583810847 -8.01%
benchmark old allocs new allocs delta
BenchmarkWriteFiles100k 2701920 2602621 -3.68%
benchmark old bytes new bytes delta
BenchmarkWriteFiles100k 115383884 64349922 -44.23%
This change is very important if your code has to write a lot of
tarballs with a lot of files.
LGTM=dsymonds
R=golang-codereviews, dave, dsymonds
CC=golang-codereviews
https://golang.org/cl/107440043
Thanks to Cedric Staub for noting that a short session key would lead
to an out-of-bounds access when conditionally copying the too short
buffer over the random session key.
LGTM=davidben, bradfitz
R=davidben, bradfitz
CC=golang-codereviews
https://golang.org/cl/102670044
The main changes fall into a few patterns:
1. Replace #define with enum.
2. Add /*c2go */ comment giving effect of #define.
This is necessary for function-like #defines and
non-enum-able #defined constants.
(Not all compilers handle negative or large enums.)
3. Add extra braces in struct initializer.
(c2go does not implement the full rules.)
This is enough to let c2go typecheck the source tree.
There may be more changes once it is doing
other semantic analyses.
LGTM=minux, iant
R=minux, dave, iant
CC=golang-codereviews
https://golang.org/cl/106860045
A TLS slot is reserved by _rt0_.*_plan9 as an automatic and
its address (which is static on Plan 9) is saved in the
global _privates symbol. The startup linkage now is exactly
like that from Plan 9 libc, and the way we access g is
exactly as if we'd have used privalloc(2).
Aside from making the code more standard, this change
drastically simplifies it, both for 386 and for amd64, and
makes the Plan 9 code in liblink common for both 386 and
amd64.
The amd64 runtime code was cleared of nxm assumptions, and
now runs on the standard Plan 9 kernel.
Note handling fixes will follow in a separate CL.
LGTM=rsc
R=golang-codereviews, rsc, bradfitz, dave
CC=0intro, ality, golang-codereviews, jas, minux.ma, mischief
https://golang.org/cl/101510049
We restored registers correctly in the usual case where the thread
is a Go-managed thread and called runtime·sighandler, but we
failed to do so when runtime·sigtramp was called on a cgo-created
thread. In that case, runtime·sigtramp called runtime·badsignal,
a Go function, and did not restore registers after it returned.
LGTM=rsc, dave
R=rsc, dave
CC=golang-codereviews, minux.ma
https://golang.org/cl/105280050
On amd64, the real time is reduced from 176.76s to 140.26s.
On ARM, the real time is reduced from 921.61s to 726.30s.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/101580043
Move decAlloc calls a bit higher in the call tree.
Cleans code marginally, improves speed marginally.
The benchmarks are noisy, but the median time from
20 consecutive 1-second runs improves by about 2%.
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/105530043
3-index slices of the form s[:len(s):len(s)]
cannot be simplified to s[::len(s)].
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/108330043
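A short demonstration of why: the middle index of a full slice expression is mandatory, so the "simplified" form does not even compile.

package main

import "fmt"

func main() {
	s := []int{1, 2, 3}
	t := s[:len(s):len(s)] // valid: len 3, cap 3
	fmt.Println(len(t), cap(t))
	// u := s[::len(s)] // syntax error: middle index required in 3-index slice
}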
There is no reason to generate different code for cap and len.
Fixes #8025.
Fixes #8026.
LGTM=rsc
R=rsc, iant, khr
CC=golang-codereviews
https://golang.org/cl/93570044
Breaks windows and race detector.
TBR=rsc
««« original CL description
runtime: stack allocator, separate from mallocgc
In order to move malloc to Go, we need to have a
separate stack allocator. If we run out of stack
during malloc, malloc will not be available
to allocate a new stack.
Stacks are the last remaining FlagNoGC objects in the
GC heap. Once they are out, we can get rid of the
distinction between the allocated/blockboundary bits.
(This will be in a separate change.)
Fixes #7468
Fixes #7424
LGTM=rsc, dvyukov
R=golang-codereviews, dvyukov, khr, dave, rsc
CC=golang-codereviews
https://golang.org/cl/104200047
»»»
TBR=rsc
CC=golang-codereviews
https://golang.org/cl/101570044
In order to move malloc to Go, we need to have a
separate stack allocator. If we run out of stack
during malloc, malloc will not be available
to allocate a new stack.
Stacks are the last remaining FlagNoGC objects in the
GC heap. Once they are out, we can get rid of the
distinction between the allocated/blockboundary bits.
(This will be in a separate change.)
Fixes #7468
Fixes #7424
LGTM=rsc, dvyukov
R=golang-codereviews, dvyukov, khr, dave, rsc
CC=golang-codereviews
https://golang.org/cl/104200047
The old code's structure needed to track indirections because of the
use of unsafe. That is no longer necessary, so we can remove all
that tracking. The code cleans up considerably but is a little slower.
We may be able to recover that performance drop. I believe the
code quality improvement is worthwhile regardless.
BenchmarkEndToEndPipe 5610 5780 +3.03%
BenchmarkEndToEndByteBuffer 3156 3222 +2.09%
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/103700043
If the actual types of two reflect values are
the same and the values are structs, they must
have the same number of fields.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/108280043
This removes a major unsafe thorn in our side, a perennial obstacle
to clean garbage collection.
Not coincidentally: In cleaning this up, several bugs were found,
including code that reached inside by-value interfaces to create
pointers for pointer-receiver methods. Unsafe code is just as
advertised.
Performance of course suffers, but not too badly. The Pipe number
is more indicative, since it's doing I/O that simulates a network
connection. Plus these are end-to-end, so each end suffers
only half of this pain.
The edit is pretty much a line-by-line conversion, with a few
simplifications and a couple of new tests. There may be more
performance to gain.
BenchmarkEndToEndByteBuffer 2493 3033 +21.66%
BenchmarkEndToEndPipe 4953 5597 +13.00%
Fixes #5159.
LGTM=rsc
R=rsc
CC=golang-codereviews, khr
https://golang.org/cl/102680045
The only text that describes the accepted format is in the package doc,
which is far away from these functions. The other flag types don't need
this explicitness because they are more obvious.
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/101550043
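The format in question is that of time.ParseDuration; for example:

package main

import (
	"flag"
	"fmt"
	"time"
)

func main() {
	timeout := flag.Duration("timeout", 2*time.Second, "e.g. 300ms, 1.5h or 2h45m")
	flag.Parse()
	fmt.Println(*timeout)
}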
Remove GC bitmap backward scanning.
This was already done once in https://golang.org/cl/5530074/
Still makes GC a bit faster.
On the garbage benchmark, before:
gc-pause-one=237345195
gc-pause-total=4746903
cputime=32427775
time=32458208
after:
gc-pause-one=235484019
gc-pause-total=4709680
cputime=31861965
time=31877772
Also prepares mgc0.c for future changes.
R=golang-codereviews, khr, khr
CC=golang-codereviews, rsc
https://golang.org/cl/105380043
newproc takes two extra pointers, not two extra registers.
On amd64p32 (nacl) they are different.
We diagnosed this before the 1.3 cut but the tree was frozen.
I believe this is causing the random problems on the builder.
Fixes #8199.
TBR=r
CC=golang-codereviews
https://golang.org/cl/102710043
Include these files in the build,
even though they don't get executed.
LGTM=r
R=golang-codereviews, r
CC=golang-codereviews
https://golang.org/cl/108180043
Output the number of spinning threads;
this is useful for understanding whether the scheduler
is in a steady state or not.
R=golang-codereviews, khr
CC=golang-codereviews, rsc
https://golang.org/cl/103540045
Say when a goroutine is locked to an OS thread in crash reports
and goroutine profiles.
It can be useful to understand what goroutines consume OS threads
(syscall and locked), e.g. if you forget to call UnlockOSThread
or leak locked goroutines.
R=golang-codereviews
CC=golang-codereviews, rsc
https://golang.org/cl/94170043
Pending acceptance of CL 101500044
and adjustment of test/fixedbugs/bug299.go.
LGTM=adonovan
R=golang-codereviews, adonovan
CC=golang-codereviews
https://golang.org/cl/110160043
The runtime has historically held two dedicated values g (current goroutine)
and m (current thread) in 'extern register' slots (TLS on x86, real registers
backed by TLS on ARM).
This CL removes the extern register m; code now uses g->m.
On ARM, this frees up the register that formerly held m (R9).
This is important for NaCl, because NaCl ARM code cannot use R9 at all.
The Go 1 macrobenchmarks (those with per-op times >= 10 µs) are unaffected:
BenchmarkBinaryTree17 5491374955 5471024381 -0.37%
BenchmarkFannkuch11 4357101311 4275174828 -1.88%
BenchmarkGobDecode 11029957 11364184 +3.03%
BenchmarkGobEncode 6852205 6784822 -0.98%
BenchmarkGzip 650795967 650152275 -0.10%
BenchmarkGunzip 140962363 141041670 +0.06%
BenchmarkHTTPClientServer 71581 73081 +2.10%
BenchmarkJSONEncode 31928079 31913356 -0.05%
BenchmarkJSONDecode 117470065 113689916 -3.22%
BenchmarkMandelbrot200 6008923 5998712 -0.17%
BenchmarkGoParse 6310917 6327487 +0.26%
BenchmarkRegexpMatchMedium_1K 114568 114763 +0.17%
BenchmarkRegexpMatchHard_1K 168977 169244 +0.16%
BenchmarkRevcomp 935294971 914060918 -2.27%
BenchmarkTemplate 145917123 148186096 +1.55%
Minux previously reported larger variations, but these were caused by
run-to-run noise, not repeatable slowdowns.
Actual code changes by Minux.
I only did the docs and the benchmarking.
LGTM=dvyukov, iant, minux
R=minux, josharian, iant, dave, bradfitz, dvyukov
CC=golang-codereviews
https://golang.org/cl/109050043
A single iteration of BenchmarkSaveRestore runs for 5 seconds
on my freebsd machine. 5 seconds seems too long for a single
iteration.
This is the only benchmark that times out on freebsd-amd64-race builder.
R=golang-codereviews, dave
CC=golang-codereviews
https://golang.org/cl/107340044
Breaks the build
««« original CL description
cmd/go: build test files containing non-runnable examples
Even if we can't run them, we should at least check that they compile.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/107320046
»»»
TBR=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/110140044
Runs for 4 seconds on my mac.
Also this is the only test that times out on freebsd in -race mode.
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/110150045
Even if we can't run them, we should at least check that they compile.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/107320046
This CL re-applies the tests added in CL 101330053 and subsequently rolled back in CL 102610043.
The original author of this change was Rui Ueyama <ruiu@google.com>
LGTM=r, ruiu
R=ruiu, r
CC=golang-codereviews
https://golang.org/cl/109170043
The number of estimated iterations required to reach the benchtime is multiplied by a safety margin (to avoid falling just short) and then rounded up to a readable number. With an accurate estimate, in the worst case, the resulting number of iterations could be 3.75x more than necessary: 1.5x for safety * 2.5x to round up (e.g. from 2eX+1 to 5eX).
This CL reduces the safety margin to 1.2x. Experimentation showed a diminishing margin of return past 1.2x, although the average case continued to show improvements down to 1.05x.
This CL also reduces the maximum round-up multiplier from 2.5x (from 2eX+1 to 5eX) to 2x, by allowing the number of iterations to be of the form 3eX.
Both changes improve benchmark wall clock times, and the effects are cumulative.
From 1.5x to 1.2x safety margin:
package old s new s delta
bytes 163 125 -23%
encoding/json 27 21 -22%
net/http 42 36 -14%
runtime 463 418 -10%
strings 82 65 -21%
Allowing 3eX iterations:
package old s new s delta
bytes 163 134 -18%
encoding/json 27 23 -15%
net/http 42 36 -14%
runtime 463 422 -9%
strings 82 72 -12%
Combined:
package old s new s delta
bytes 163 112 -31%
encoding/json 27 20 -26%
net/http 42 30 -29%
runtime 463 346 -25%
strings 82 60 -27%
LGTM=crawshaw, r, rsc
R=golang-codereviews, crawshaw, r, rsc
CC=golang-codereviews
https://golang.org/cl/105990045
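A sketch of the rounding described, allowing 3eX as well as 1eX, 2eX, and 5eX (close to, but not necessarily identical to, the testing package's code):

package main

import "fmt"

// roundDown10 rounds n down to the nearest power of 10.
func roundDown10(n int) int {
	result := 1
	for n >= 10 {
		n /= 10
		result *= 10
	}
	return result
}

// roundUp rounds n up to a readable count: 1eX, 2eX, 3eX, or 5eX.
func roundUp(n int) int {
	base := roundDown10(n)
	switch {
	case n <= base:
		return base
	case n <= 2*base:
		return 2 * base
	case n <= 3*base:
		return 3 * base
	case n <= 5*base:
		return 5 * base
	default:
		return 10 * base
	}
}

func main() {
	fmt.Println(roundUp(21), roundUp(250), roundUp(400)) // 30 300 500
}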
The previous call to parseRange already checks whether
all the ranges start before the end of file.
LGTM=robert.hencke, bradfitz
R=golang-codereviews, robert.hencke, gobot, bradfitz
CC=golang-codereviews
https://golang.org/cl/91880044
Update #1435
This proposal disables Setuid and Setgid on all linux platforms.
Issue 1435 has been open for a long time, and it is unlikely to be addressed soon, so an argument was made by a commenter
(https://code.google.com/p/go/issues/detail?id=1435#c45)
that these functions should be made to fail rather than succeed in their broken state.
LGTM=ruiu, iant
R=iant, ruiu
CC=golang-codereviews
https://golang.org/cl/106170043
MOV with SSE registers seems faster than REP MOVSQ if the
size being copied is less than about 2K. Previously we
didn't use MOV if the memory region was larger than 256
bytes. This patch improves the performance of 257 ~ 2048
byte non-overlapping copies by using MOV.
Here is the benchmark result on Intel Xeon 3.5GHz (Nehalem).
benchmark old ns/op new ns/op delta
BenchmarkMemmove16 4 4 +0.42%
BenchmarkMemmove32 5 5 -0.20%
BenchmarkMemmove64 6 6 -0.81%
BenchmarkMemmove128 7 7 -0.82%
BenchmarkMemmove256 10 10 +1.92%
BenchmarkMemmove512 29 16 -44.90%
BenchmarkMemmove1024 37 25 -31.55%
BenchmarkMemmove2048 55 44 -19.46%
BenchmarkMemmove4096 92 91 -0.76%
benchmark old MB/s new MB/s speedup
BenchmarkMemmove16 3370.61 3356.88 1.00x
BenchmarkMemmove32 6368.68 6386.99 1.00x
BenchmarkMemmove64 10367.37 10462.62 1.01x
BenchmarkMemmove128 17551.16 17713.48 1.01x
BenchmarkMemmove256 24692.81 24142.99 0.98x
BenchmarkMemmove512 17428.70 31687.72 1.82x
BenchmarkMemmove1024 27401.82 40009.45 1.46x
BenchmarkMemmove2048 36884.86 45766.98 1.24x
BenchmarkMemmove4096 44295.91 44627.86 1.01x
LGTM=khr
R=golang-codereviews, gobot, khr
CC=golang-codereviews
https://golang.org/cl/90500043
sync.Pool is not supposed to be used everywhere, but is
a last resort.
««« original CL description
strings: use sync.Pool to cache buffer
benchmark old ns/op new ns/op delta
BenchmarkByteReplacerWriteString 3596 3094 -13.96%
benchmark old allocs new allocs delta
BenchmarkByteReplacerWriteString 1 0 -100.00%
LGTM=dvyukov
R=bradfitz, dave, dvyukov
CC=golang-codereviews
https://golang.org/cl/101330053
»»»
LGTM=dave
R=r, dave
CC=golang-codereviews
https://golang.org/cl/102610043
This requires minimal changes to the runtime hooks. In particular,
synchronization events must be done only on valid addresses now,
so I've added the additional checks to race.c.
LGTM=iant
R=iant
CC=golang-codereviews
https://golang.org/cl/101000046
benchmark old ns/op new ns/op delta
BenchmarkByteReplacerWriteString 7359 3661 -50.25%
LGTM=dave
R=golang-codereviews, dave
CC=golang-codereviews
https://golang.org/cl/102550043
The afterprologue check was required when we did not know
about the return arguments of functions and/or they were not zeroed.
Now 100% precision is required for stacks due to stack copying,
so it must work without afterprologue one way or another.
I could limit this change for 1.3 to merely adding a TODO,
but this check is super confusing, so I don't want this knowledge to get lost.
LGTM=rsc
R=golang-codereviews, gobot, rsc, khr
CC=golang-codereviews, khr, rsc
https://golang.org/cl/96580045
Use WriteString instead of allocating a byte slice as a
buffer. This was a TODO.
benchmark old ns/op new ns/op delta
BenchmarkWriteString 40139 19991 -50.20%
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/107190044
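The general pattern: io.WriteString hands the string straight to a WriteString method when the destination has one, skipping the []byte(s) conversion and its allocation.

package main

import (
	"bufio"
	"io"
	"os"
)

func main() {
	w := bufio.NewWriter(os.Stdout)
	io.WriteString(w, "hello, world\n") // uses w.WriteString; no []byte copy
	w.Flush()
}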
requires a decoder to do its own byte buffering instead of using
bufio.Reader, due to byte stuffing.
benchmark old MB/s new MB/s speedup
BenchmarkDecodeBaseline 33.40 50.65 1.52x
BenchmarkDecodeProgressive 24.34 31.92 1.31x
On 6g, unsafe.Sizeof(huffman{}) falls from 4872 to 964 bytes, and
the decoder struct contains 8 of those.
LGTM=r
R=r, nightlyone
CC=bradfitz, couchmoney, golang-codereviews, raph
https://golang.org/cl/109050045
Storing temporary values to a slice is slower than storing
them to local variables of type byte.
benchmark old MB/s new MB/s speedup
BenchmarkEncodeToStringBase32 102.21 156.66 1.53x
BenchmarkEncodeToStringBase64 124.25 177.91 1.43x
LGTM=crawshaw
R=golang-codereviews, crawshaw, bradfitz, dave
CC=golang-codereviews
https://golang.org/cl/109820045
Just to be more thorough.
No need to push this to 1.3; it's just a test change that
worked without any changes to the code being tested.
LGTM=crawshaw
R=golang-codereviews, crawshaw
CC=golang-codereviews
https://golang.org/cl/109080045
genericReplacer.lookup is called for each byte of an input
string. In many (most?) cases, lookup will fail for the first
byte, and it will return immediately. Adding a fast path for
that case seems worth it.
Benchmark on my Xeon 3.5GHz Linux box:
benchmark old ns/op new ns/op delta
BenchmarkGenericNoMatch 2691 774 -71.24%
BenchmarkGenericMatch1 7920 8151 +2.92%
BenchmarkGenericMatch2 52336 39927 -23.71%
BenchmarkSingleMaxSkipping 1575 1575 +0.00%
BenchmarkSingleLongSuffixFail 1429 1429 +0.00%
BenchmarkSingleMatch 56228 55444 -1.39%
BenchmarkByteByteNoMatch 568 568 +0.00%
BenchmarkByteByteMatch 977 972 -0.51%
BenchmarkByteStringMatch 1669 1687 +1.08%
BenchmarkHTMLEscapeNew 422 422 +0.00%
BenchmarkHTMLEscapeOld 692 670 -3.18%
BenchmarkByteByteReplaces 8492 8474 -0.21%
BenchmarkByteByteMap 2817 2808 -0.32%
LGTM=rsc
R=golang-codereviews, bradfitz, dave, rsc
CC=golang-codereviews
https://golang.org/cl/79200044
Bug was introduced recently. Add more tests, fix the bugs.
Suppress + sign when not required in zero padding.
Do not zero pad infinities.
All old tests still pass.
This time for sure!
Fixes #8217.
LGTM=rsc
R=golang-codereviews, dan.kortschak, rsc
CC=golang-codereviews
https://golang.org/cl/103480043
The go:nosplit change wasn't the problem, reinstating.
««« original CL description
undo CL 93380044 / 7f0999348917
Partial undo, just of go:nosplit annotation. Somehow it
is breaking the windows builders.
TBR=bradfitz
««« original CL description
runtime: implement string ops in Go
Also implement go:nosplit annotation. Not really needed
for now, but we'll definitely need it for other conversions.
benchmark old ns/op new ns/op delta
BenchmarkRuneIterate 534 474 -11.24%
BenchmarkRuneIterate2 535 470 -12.15%
LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
»»»
TBR=bradfitz
CC=golang-codereviews
https://golang.org/cl/105260044
»»»
TBR=bradfitz
R=bradfitz, golang-codereviews
CC=golang-codereviews
https://golang.org/cl/103490043
Partial undo, just of go:nosplit annotation. Somehow it
is breaking the windows builders.
TBR=bradfitz
««« original CL description
runtime: implement string ops in Go
Also implement go:nosplit annotation. Not really needed
for now, but we'll definitely need it for other conversions.
benchmark old ns/op new ns/op delta
BenchmarkRuneIterate 534 474 -11.24%
BenchmarkRuneIterate2 535 470 -12.15%
LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
»»»
TBR=bradfitz
CC=golang-codereviews
https://golang.org/cl/105260044
Also implement go:nosplit annotation. Not really needed
for now, but we'll definitely need it for other conversions.
benchmark old ns/op new ns/op delta
BenchmarkRuneIterate 534 474 -11.24%
BenchmarkRuneIterate2 535 470 -12.15%
LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
We don't need to shift array elements to shuffle them.
We just have to swap a selected element with the 0th element.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/91750044
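The trick in one line: swapping the selected element with element 0 is O(1), while shifting elements over is O(n). A sketch:

package main

import (
	"fmt"
	"math/rand"
)

func main() {
	s := []int{10, 20, 30, 40}
	i := rand.Intn(len(s))
	s[0], s[i] = s[i], s[0] // one swap instead of shifting elements
	fmt.Println(s)
}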
Printf("%x", "abc") was "0x610x620x63"; is now "0x616263", which
is surely better.
Printf("% #x", "abc") is still "0x61 0x62 0x63".
Fixes #8080.
LGTM=bradfitz, gri
R=golang-codereviews, bradfitz, gri
CC=golang-codereviews
https://golang.org/cl/106990043
Also added a test to verify that os.Getppid() works across all platforms.
LGTM=alex.brainman
R=golang-codereviews, alex.brainman, shreveal, iant
CC=golang-codereviews
https://golang.org/cl/102320044
Reportedly in the Linux 3.16 kernel the VDSO will not have
section headers or a normal symbol table.
Too late for 1.3 but perhaps for 1.3.1, if there is one.
Fixes #8197.
LGTM=rsc
R=golang-codereviews, mattn.jp, rsc
CC=golang-codereviews
https://golang.org/cl/101260044
bufio.Scanner.Scan returns whether the scan succeeded, not whether it
is done, so the test was mistakenly breaking early.
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/93670045
makes windows-amd64-race benchmarks slower
««« original CL description
testing: make benchmarking faster
Allow the number of benchmark iterations to grow faster for fast benchmarks, and don't round up twice.
Using the default benchtime, this CL reduces wall clock time to run benchmarks:
net/http 49s -> 37s (-24%)
runtime 8m31s -> 5m55s (-30%)
bytes 2m37s -> 1m29s (-43%)
encoding/json 29s -> 21s (-27%)
strings 1m16s -> 53s (-30%)
LGTM=crawshaw
R=golang-codereviews, crawshaw
CC=golang-codereviews
https://golang.org/cl/101970047
»»»
TBR=josharian
CC=golang-codereviews
https://golang.org/cl/105950044
It appears that something about Go on Windows
cannot handle the fault caused by a jump to address 0.
The way Go represents and calls functions, this
never happened at all, until CL 105140044.
This CL changes the code added in CL 105140044
to make jump to 0 impossible once again.
Fixes #8047. (again, on Windows)
TBR=bradfitz
R=golang-codereviews, dave
CC=adg, golang-codereviews, iant, r
https://golang.org/cl/105120044
jmpdefer modifies PC, SP, and LR, and not atomically,
so walking past jmpdefer will often end up in a state
where the three are not a consistent execution snapshot.
This was causing warning messages a few frames later
when the traceback realized it was confused, but given
the right memory it could easily crash instead.
Update #8153
LGTM=minux, iant
R=golang-codereviews, minux, iant
CC=golang-codereviews, r
https://golang.org/cl/107970043
The test requires that timerproc runs, but busy loops and starves
the scheduler so that, with high probability, timerproc doesn't run.
Avoid the issue by expecting the test to succeed; if not, a major
outside timeout will kill it and let us know.
As you can see from the diffs, there have been several attempts to
fix this with chicanery, but none has worked. Don't bother trying
any more.
Fixes #8136.
LGTM=rsc
R=rsc, josharian
CC=golang-codereviews
https://golang.org/cl/105140043
Allow the number of benchmark iterations to grow faster for fast benchmarks, and don't round up twice.
Using the default benchtime, this CL reduces wall clock time to run benchmarks:
net/http 49s -> 37s (-24%)
runtime 8m31s -> 5m55s (-30%)
bytes 2m37s -> 1m29s (-43%)
encoding/json 29s -> 21s (-27%)
strings 1m16s -> 53s (-30%)
LGTM=crawshaw
R=golang-codereviews, crawshaw
CC=golang-codereviews
https://golang.org/cl/101970047
Call copy with as large a buffer as possible to reduce the
number of function calls.
benchmark old ns/op new ns/op delta
BenchmarkBytesRepeat 540 162 -70.00%
BenchmarkStringsRepeat 563 177 -68.56%
LGTM=josharian
R=golang-codereviews, josharian, dave, dvyukov
CC=golang-codereviews
https://golang.org/cl/90550043
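The doubling pattern behind the numbers: each copy call doubles the filled prefix, so the number of calls is logarithmic in the output length. A sketch:

package main

import "fmt"

// repeat fills the result with count copies of b using O(log n) copy calls.
func repeat(b []byte, count int) []byte {
	nb := make([]byte, len(b)*count)
	n := copy(nb, b)
	for n < len(nb) {
		n += copy(nb[n:], nb[:n]) // double the filled prefix each time
	}
	return nb
}

func main() {
	fmt.Println(string(repeat([]byte("ab"), 4))) // abababab
}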
Previously, an input string was stripped of newline
characters at the beginning of DecodeString and then passed
to Decode. Decode again tried to strip newline characters.
That's a waste of time.
benchmark old MB/s new MB/s speedup
BenchmarkDecodeString 38.37 65.20 1.70x
LGTM=dave, bradfitz
R=golang-codereviews, dave, bradfitz
CC=golang-codereviews
https://golang.org/cl/91770051
There is a hierarchy of location defined by loop depth:
-1 = the heap
0 = function results
1 = local variables (and parameters)
2 = local variable declared inside a loop
3 = local variable declared inside a loop inside a loop
etc.
In general if an address from loopdepth n is assigned to
something in loop depth m < n, that indicates an extended
lifetime of some form that requires a heap allocation.
Function results can be local variables too, though, and so
they don't actually fit into the hierarchy very well.
Treat the address of a function result as level 1 so that
if it is written back into a result, the address is treated
as escaping.
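A minimal example of the case being fixed: the address of one named result stored into another result must be treated as escaping, since results outlive the frame.
func f() (x int, p *int) {
    p = &x // the address of result x is written back into result p: x must escape
    return
}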
Fixes#8185.
LGTM=iant
R=iant
CC=golang-codereviews
https://golang.org/cl/108870044
Provide Nextafter64 as alias to Nextafter.
For submission after the 1.3 release.
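For reference, Nextafter steps a float64 to the next representable value toward y:
fmt.Println(math.Nextafter(1, 2)) // 1.0000000000000002, the next float64 after 1 toward 2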
Fixes#8117.
LGTM=adonovan
R=adonovan
CC=golang-codereviews
https://golang.org/cl/101750048
The analysis for &x was using the loop depth on x set
during x's declaration. A type switch creates a list of
implicit declarations that were not getting initialized
with loop depths.
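A hypothetical reproduction of the shape that triggered it: the type switch declares an implicit variable per case, and taking its address inside a loop requires that variable to carry the right loop depth.
var sink *int

func g(xs []interface{}) {
    for _, x := range xs {     // loop body is at loop depth 2
        switch v := x.(type) { // each case gets an implicit declaration of v
        case int:
            sink = &v          // &v escapes; v's loop depth must be recorded
        }
    }
}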
Fixes#8176.
LGTM=iant
R=iant
CC=golang-codereviews
https://golang.org/cl/108860043
The putpclcdelta function set the DWARF line number PC to
s->value + pcline->pc, which is correct, but the code then set
the local variable pc to epc, which can be a different value.
This caused the next delta in the DWARF table to be wrong.
Fixes#8098.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/104950045
A runtime.Goexit during a panic-invoked deferred call
left the panic stack intact even though all the stack frames
are gone when the goroutine is torn down.
The next goroutine to reuse that struct will have a
bogus panic stack and can cause the traceback routines
to walk into garbage.
Most likely to happen during tests, because t.Fatal might
be called during a deferred func and uses runtime.Goexit.
This "not enough cleared in Goexit" failure mode has
happened to us multiple times now. Clear all the pointers
that don't make sense to keep, not just gp->panic.
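The typical way to hit it, per the description above (a minimal sketch): t.Fatal inside a deferred function runs runtime.Goexit while the panic is still in progress.
func TestCrash(t *testing.T) {
    defer func() {
        t.Fatal("cleanup failed") // runs runtime.Goexit while the panic is active
    }()
    panic("boom")
}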
Fixes#8158.
LGTM=iant, dvyukov
R=iant, dvyukov
CC=golang-codereviews
https://golang.org/cl/102220043
I am not sure what the rounding here was
trying to do, but it was skipping the first
pointer on Native Client.
The code above the rounding already checks
that xoffset is widthptr-aligned, so the rnd
was a no-op everywhere but on Native Client.
And on Native Client it was wrong.
Perhaps it was supposed to be rounding down,
not up, but zerorange handles the extra 32 bits
correctly, so the rnd does not seem to be necessary
at all.
This wouldn't be worth doing for Go 1.3 except
that it can affect code on the playground.
Fixes#8155.
LGTM=r, iant
R=golang-codereviews, r, iant
CC=dvyukov, golang-codereviews, khr
https://golang.org/cl/108740047
It's not clear how widespread this issue is, but we do have a
test case generated by a development version of clang.
I don't know whether this should go into 1.3 or not; happy to
hear arguments either way.
LGTM=rsc
R=golang-codereviews, bradfitz, rsc
CC=golang-codereviews
https://golang.org/cl/96680045
I introduced this bug when I changed the escape
analysis to run in phases based on call graph
dependency order, in order to be more precise about
inputs escaping back to outputs (functions returning
their arguments).
Given
func f(z **int) *int { return *z }
we were tagging the function as 'z does not escape
and is not returned', which is all true, but not
enough information.
If used as:
var x int
p := &x
q := &p
leak(f(q))
then the compiler might try to keep x, p, and q all
on the stack, since (according to the recorded
information) nothing interesting ends up being
passed to leak.
In fact since f returns *q = p, &x is passed to leak
and x needs to be heap allocated.
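Putting the fragments above together (leak is a hypothetical function that stores its argument somewhere long-lived):
var sink *int

func leak(p *int) { sink = p }

func f(z **int) *int { return *z }

func g() {
    var x int
    p := &x
    q := &p
    leak(f(q)) // f returns *q = p = &x, so x must be heap-allocated
}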
To trigger the bug, you need a chain that the
compiler wants to keep on the stack (like x, p, q
above), and you need a function that returns an
indirect of its argument, and you need to pass the
head of the chain to that function. This doesn't
come up very often: this bug has been present since
June 2012 (between Go 1 and Go 1.1) and we haven't
seen it until now. It helps that most functions that
return indirects are getters that are simple enough
to be inlined, avoiding the bug.
Earlier versions of Go also had the benefit that if
&x really wasn't used beyond x's lifetime, nothing
broke if you put &x in a heap-allocated structure
accidentally. With the new stack copying, though,
heap-allocated structures containing &x are not
updated when the stack is copied and x moves,
leading to crashes in Go 1.3 that were not crashes
in Go 1.2 or Go 1.1.
The fix is in two parts.
First, in the analysis of a function, recognize when
a value obtained via indirect of a parameter ends up
being returned. Mark those parameters as having
content escape back to the return results (but we
don't bother to write down which result).
Second, when using the analysis to analyze, say,
f(q), mark parameters with content escaping as
having any indirections escape to the heap. (We
don't bother trying to match the content to the
return value.)
The fix could be less precise (simpler).
In the first part we might mark all content-escaping
parameters as plain escaping, and then the second
part could be dropped. Or we might assume that when
calling f(q) all the things pointed at by q escape
always (for any f and q).
The fix could also be more precise (more complex).
We might record the specific mapping from parameter
to result along with the number of indirects from the
parameter to the thing being returned as the result,
and then at the call sites we could set up exactly the
right graph for the called function. That would make
notleaks(f(q)) be able to keep x on the stack, because
the result of f(q) isn't passed to anything that leaks it.
The less precise the fix, the more stack allocations
become heap allocations.
This fix is exactly as precise as it needs to be so that
none of the current stack allocations in the standard
library turn into heap allocations.
Fixes#8120.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews, khr, r
https://golang.org/cl/102040046
The 'address taken' bit in a function variable was not
propagating into the inlined copies, causing incorrect
liveness information.
LGTM=dsymonds, bradfitz
R=golang-codereviews, bradfitz
CC=dsymonds, golang-codereviews, iant, khr, r
https://golang.org/cl/96670046
The 1-byte write was silently clearing a byte on the stack.
If there was another function call with more arguments
in the same stack frame, no harm done.
Otherwise, if the variable at that location was already zero,
no harm done.
Otherwise, problems.
Fixes#8139.
LGTM=dsymonds
R=golang-codereviews, dsymonds
CC=golang-codereviews, iant, r
https://golang.org/cl/100940043
This is a workaround - the code should be better than this - but the
fix avoids generating large numbers of linehist entries for the wrapper
functions that enable interface conversions. There can be many of
them, they all happen at the end of compilation, and they can all
share a linehist entry.
Avoids bad n^2 behavior in liblink.
Test case in issue 8135 goes from 64 seconds to 2.5 seconds (still bad
but not intolerable).
Fixes#8135.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/104840043
If we see a typedef to an anonymous struct more than once,
presumably in two different Go files that import "C", use the
same Go type name.
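Schematically (hypothetical files):
// a.go and b.go, in the same package, both contain:
//
//     /*
//     typedef struct { int x; } T;
//     */
//     import "C"
//
// Both files must resolve C.T to one shared Go type name,
// not two distinct Go types for the same anonymous struct.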
Fixes#8133.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/102080043
Right now, any revision on the default branch after go1.3beta2 is
described by "go version" as go1.3beta2 plus some revision.
That's OK for now, but once go1.3 is released, that will seem wrong.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/98650046
We were requiring that the defer stack and the panic stack
be completely processed, thinking that if any were left over
the stack scan and the defer stack/panic stack must be out
of sync. It turns out that the panic stack may well have
leftover entries in some situations, and that's okay.
Fixes#8132.
LGTM=minux, r
R=golang-codereviews, minux, r
CC=golang-codereviews, iant, khr
https://golang.org/cl/100900044
C globals are conservatively scanned. This helps
avoid false retention, especially for 32 bit.
LGTM=rsc
R=golang-codereviews, khr, rsc
CC=golang-codereviews
https://golang.org/cl/102040043
The 'continuation pc' is where the frame will continue
execution, if anywhere. For a frame that stopped execution
due to a CALL instruction, the continuation pc is immediately
after the CALL. But for a frame that stopped execution due to
a fault, the continuation pc is the pc after the most recent CALL
to deferproc in that frame, or else 0. That is where execution
will continue, if anywhere.
The liveness information is only recorded for CALL instructions.
This change makes sure that we never look for liveness information
except for CALL instructions.
Using a valid PC fixes crashes when a garbage collection or
stack copying tries to process a stack frame that has faulted.
Record continuation pc in heapdump (format change).
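A sketch of the faulting-frame shape (cleanup is a hypothetical deferred function): the frame contains a CALL to deferproc, then faults; the pc just after that CALL is the only point in the frame with liveness data, so it serves as the continuation pc.
func h(p *int) {
    defer cleanup() // the CALL to deferproc; liveness is recorded at CALL sites
    _ = *p          // a nil p faults here, at a pc with no liveness information
}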
Fixes#8048.
LGTM=iant, khr
R=khr, iant, dvyukov
CC=golang-codereviews, r
https://golang.org/cl/100870044
Update #2675
The code here was using the error check for Linux/386,
not the one for FreeBSD/386. Most of the time it worked.
Thanks to Neel Natu (FreeBSD developer) for finding this.
The s/JCC/JAE/ a few lines later is a no-op but makes the
test match the rest of the file. Why we write JAE instead of JCC
I don't know, but the two are equivalent and the file might
as well be consistent.
LGTM=bradfitz, minux
R=golang-codereviews, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/99680044
This CL forces the optimizer to preserve some memory stores
that would be redundant except that a stack scan due to garbage
collection or stack copying might look at them during a function call.
As such, it forces additional memory writes and therefore slows
down the execution of some programs, especially garbage-heavy
programs that are already limited by memory bandwidth.
The slowdown can be as much as 7% for end-to-end benchmarks.
These numbers are from running go1.test -test.benchtime=5s three times,
taking the best (lowest) ns/op for each benchmark. I am excluding
benchmarks with time/op < 10us to focus on macro effects.
All benchmarks are on amd64.
Comparing tip (a27f34c771cb) against this CL on an Intel Core i5 MacBook Pro:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 3876500413 3856337341 -0.52%
BenchmarkFannkuch11 2965104777 2991182127 +0.88%
BenchmarkGobDecode 8563026 8788340 +2.63%
BenchmarkGobEncode 5050608 5267394 +4.29%
BenchmarkGzip 431191816 434168065 +0.69%
BenchmarkGunzip 107873523 110563792 +2.49%
BenchmarkHTTPClientServer 85036 86131 +1.29%
BenchmarkJSONEncode 22143764 22501647 +1.62%
BenchmarkJSONDecode 79646916 85658808 +7.55%
BenchmarkMandelbrot200 4720421 4700108 -0.43%
BenchmarkGoParse 4651575 4712247 +1.30%
BenchmarkRegexpMatchMedium_1K 71986 73490 +2.09%
BenchmarkRegexpMatchHard_1K 111018 117495 +5.83%
BenchmarkRevcomp 648798723 659352759 +1.63%
BenchmarkTemplate 112673009 112819078 +0.13%
Comparing tip (a27f34c771cb) against this CL on an Intel Xeon E5520:
BenchmarkBinaryTree17 5461110720 5393104469 -1.25%
BenchmarkFannkuch11 4314677151 4327177615 +0.29%
BenchmarkGobDecode 11065853 11235272 +1.53%
BenchmarkGobEncode 6500065 6959837 +7.07%
BenchmarkGzip 647478596 671769097 +3.75%
BenchmarkGunzip 139348579 141096376 +1.25%
BenchmarkHTTPClientServer 69376 73610 +6.10%
BenchmarkJSONEncode 30172320 31796106 +5.38%
BenchmarkJSONDecode 113704905 114239137 +0.47%
BenchmarkMandelbrot200 6032730 6003077 -0.49%
BenchmarkGoParse 6775251 6405995 -5.45%
BenchmarkRegexpMatchMedium_1K 111832 113895 +1.84%
BenchmarkRegexpMatchHard_1K 161112 168420 +4.54%
BenchmarkRevcomp 876363406 892319935 +1.82%
BenchmarkTemplate 146273096 148998339 +1.86%
Just to get a sense of where we are compared to the previous release,
here are the same benchmarks comparing Go 1.2 to this CL.
Comparing Go 1.2 against this CL on an Intel Core i5 MacBook Pro:
BenchmarkBinaryTree17 4370077662 3856337341 -11.76%
BenchmarkFannkuch11 3347052657 2991182127 -10.63%
BenchmarkGobDecode 8791384 8788340 -0.03%
BenchmarkGobEncode 4968759 5267394 +6.01%
BenchmarkGzip 437815669 434168065 -0.83%
BenchmarkGunzip 94604099 110563792 +16.87%
BenchmarkHTTPClientServer 87798 86131 -1.90%
BenchmarkJSONEncode 22818243 22501647 -1.39%
BenchmarkJSONDecode 97182444 85658808 -11.86%
BenchmarkMandelbrot200 4733516 4700108 -0.71%
BenchmarkGoParse 5054384 4712247 -6.77%
BenchmarkRegexpMatchMedium_1K 67612 73490 +8.69%
BenchmarkRegexpMatchHard_1K 107321 117495 +9.48%
BenchmarkRevcomp 733270055 659352759 -10.08%
BenchmarkTemplate 109304977 112819078 +3.21%
Comparing Go 1.2 against this CL on an Intel Xeon E5520:
BenchmarkBinaryTree17 5986953594 5393104469 -9.92%
BenchmarkFannkuch11 4861139174 4327177615 -10.98%
BenchmarkGobDecode 11830997 11235272 -5.04%
BenchmarkGobEncode 6608722 6959837 +5.31%
BenchmarkGzip 661875826 671769097 +1.49%
BenchmarkGunzip 138630019 141096376 +1.78%
BenchmarkHTTPClientServer 71534 73610 +2.90%
BenchmarkJSONEncode 30393609 31796106 +4.61%
BenchmarkJSONDecode 139645860 114239137 -18.19%
BenchmarkMandelbrot200 5988660 6003077 +0.24%
BenchmarkGoParse 6974092 6405995 -8.15%
BenchmarkRegexpMatchMedium_1K 111331 113895 +2.30%
BenchmarkRegexpMatchHard_1K 165961 168420 +1.48%
BenchmarkRevcomp 995049292 892319935 -10.32%
BenchmarkTemplate 145623363 148998339 +2.32%
Fixes#8036.
LGTM=khr
R=golang-codereviews, josharian, khr
CC=golang-codereviews, iant, r
https://golang.org/cl/99660044
The rtype struct is meant to be a copy of reflect.rtype. The
zero field was added to reflect.rtype in 18495:6e50725ac753.
LGTM=rsc
R=khr, rsc
CC=golang-codereviews
https://golang.org/cl/93660045
[Same as CL 102820043 except that the changes to 6g/gsubr.c are
also applied to 5g/gsubr.c and 8g/gsubr.c. The problem I had last night
trying to do that was that 8g's copy of nodarg has different
(but equivalent) control flow and I was pasting the new code
into the wrong place.]
Description from CL 102820043:
The 'nodarg' function is used to obtain a Node*
representing a function argument or result.
It returned a brand new Node*, but that violates
the guarantee in most places in the compiler that
two Node*s refer to the same variable if and only if
they are the same Node* pointer. Reestablish that
invariant by making nodarg return a preexisting
named variable if present.
Having fixed that, avoid any copy during x=x in
componentgen, because the VARDEF we emit
before the copy marks the lhs x as dead incorrectly.
The change in walk.c avoids modifying the result
of nodarg. This was the only place in the compiler
that did so.
Fixes#8097.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, iant, khr, r
https://golang.org/cl/103750043
Breaks 386 and arm builds.
The obvious reason is that this CL only edited 6g/gsubr.c
and failed to edit 5g/gsubr.c and 8g/gsubr.c.
However, the obvious CL applying the same edit to those
files (CL 101900043) causes mysterious build failures
in various of the standard package tests, usually involving
reflect. Something deep and subtle is broken but only on
the 32-bit systems.
Undo this CL for now.
««« original CL description
cmd/gc: fix x=x crash
The 'nodarg' function is used to obtain a Node*
representing a function argument or result.
It returned a brand new Node*, but that violates
the guarantee in most places in the compiler that
two Node*s refer to the same variable if and only if
they are the same Node* pointer. Reestablish that
invariant by making nodarg return a preexisting
named variable if present.
Having fixed that, avoid any copy during x=x in
componentgen, because the VARDEF we emit
before the copy marks the lhs x as dead incorrectly.
The change in walk.c avoids modifying the result
of nodarg. This was the only place in the compiler
that did so.
Fixes#8097.
LGTM=r, khr
R=golang-codereviews, r, khr
CC=golang-codereviews, iant
https://golang.org/cl/102820043
»»»
TBR=r
CC=golang-codereviews, khr
https://golang.org/cl/95660043
The 'nodarg' function is used to obtain a Node*
representing a function argument or result.
It returned a brand new Node*, but that violates
the guarantee in most places in the compiler that
two Node*s refer to the same variable if and only if
they are the same Node* pointer. Reestablish that
invariant by making nodarg return a preexisting
named variable if present.
Having fixed that, avoid any copy during x=x in
componentgen, because the VARDEF we emit
before the copy marks the lhs x as dead incorrectly.
The change in walk.c avoids modifying the result
of nodarg. This was the only place in the compiler
that did so.
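The degenerate assignment at issue, for reference (a minimal sketch): componentgen would emit a VARDEF for the left-hand x before copying, incorrectly marking x dead during the copy of itself.
type T struct{ a, b int }

func g(x T) T {
    x = x // self-assignment: must not be preceded by a VARDEF that kills x
    return x
}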
Fixes#8097.
LGTM=r, khr
R=golang-codereviews, r, khr
CC=golang-codereviews, iant
https://golang.org/cl/102820043
Update #8112
Hide one-pass regexp API.
This means moving the code from regexp/syntax to regexp,
but it avoids being locked into the specific API chosen for
the implementation.
It also removes a slice field from the syntax.Inst, which
should avoid bloating the memory footprint of a non-one-pass
regexp unnecessarily.
LGTM=r
R=golang-codereviews, r
CC=golang-codereviews, iant
https://golang.org/cl/98610046
For incomplete struct S, C.T and C.struct_S were interchangeable in Go 1.2
and earlier, because all incomplete types were interchangeable
(even C.struct_S1 and C.struct_S2).
CL 76450043, which fixed issue 7409, made different incomplete types
different from Go's point of view, so that they were no longer completely
interchangeable.
However, imprecision about C.T and C.struct_S - really the same
underlying C type - is the one behavior enabled by the bug that
is most likely to be depended on by existing cgo code.
Explicitly allow it, to keep that code working.
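A sketch of the shape, with a hypothetical preamble: given an incomplete struct S and a typedef T for it, both Go names keep referring to the same type.
/*
struct S;            // incomplete type
typedef struct S T;  // same underlying type as struct S
void f(struct S *p);
*/
import "C"

func call(p *C.T) {
    C.f(p) // permitted again: C.T and C.struct_S share one underlying C type
}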
Fixes#7786.
LGTM=iant, r
R=golang-codereviews, iant, r
CC=golang-codereviews
https://golang.org/cl/98580046
Map iteration order issue. Go 1.2 and earlier had stable results
for small maps.
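For reference, code must not depend on iteration order even for tiny maps:
m := map[string]int{"a": 1, "b": 2}
for k, v := range m {
    fmt.Println(k, v) // order can differ between runs, even with two entries
}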
Fixes#8115.
LGTM=r, rsc
R=golang-codereviews, r
CC=dsymonds, golang-codereviews, iant, rsc
https://golang.org/cl/98580047
CL 51010045 fixed the first one of these:
cmd/gc: return canonical Node* from temp
For historical reasons, temp was returning a copy
of the created Node*, not the original Node*.
This meant that if analysis recorded information in the
returned node (for example, n->addrtaken = 1), the
analysis would not show up on the original Node*, the
one kept in fn->dcl and consulted during liveness
bitmap creation.
Correct this, and watch for it when setting addrtaken.
Fixes#7083.
R=khr, dave, minux.ma
CC=golang-codereviews
https://golang.org/cl/51010045
CL 53200043 fixed the second:
cmd/gc: fix race build
Missed this case in CL 51010045.
TBR=khr
CC=golang-codereviews
https://golang.org/cl/53200043
This CL fixes the third. There are only three nod(OXXX, ...)
calls in sinit.c, so maybe we're done. Embarrassing that it
took three CLs to find all three.
Fixes#8028.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, iant
https://golang.org/cl/100800046
In the first very rough draft of the reordering code
that was introduced in the Go 1.3 cycle, the pre-allocated
temporary for a ... argument was held in n->right.
It moved to n->alloc but the code avoiding n->right
was left behind in order.c. In copy(x, <-c), the receive
is in n->right and must be processed. Delete the special
case code, removing the bug.
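The expression that lost its receive, for reference:
c := make(chan []int, 1)
c <- []int{1, 2, 3}
x := make([]int, 3)
copy(x, <-c) // the receive supplies the source slice and must be evaluated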
Fixes#8039.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/100820044
The code was assuming that pointer alignment is the
maximum alignment, but on NaCl uint64 alignment is
even more strict.
Brad checked in the test earlier today; this fixes the build.
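A sketch of the distinction, with the values stated in the description for nacl/386: uint64 alignment can exceed pointer alignment, so rounding offsets to pointer size is not enough.
type T struct {
    b byte
    u uint64 // on nacl/386 this field needs 8-byte alignment; pointers need only 4
}

func main() {
    fmt.Println(unsafe.Alignof(T{}.u)) // 8 on nacl/386, though pointers are 4 bytes
}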
Fixes#7863.
TBR=iant
CC=golang-codereviews
https://golang.org/cl/98630046