Change the stack unwinding code to compensate for the dynamic
relocation of symbols.
Change the gc instruction GC_CALL to use a relative offset instead of
an absolute address.
R=golang-dev
CC=golang-dev
https://golang.org/cl/7248048
* Separate internal and external LockOSThread, for cgo safety.
* Show goroutine that made faulting cgo call.
* Never start a panic due to a signal caused by a cgo call.
Fixes #3774.
Fixes #3775.
Fixes #3797.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/7228081
Binary data in mprof.goc may prevent the garbage collector from freeing
memory blocks. This patch replaces all calls to runtime·mallocgc() with
calls to an allocator private to mprof.goc, thus making the private
memory invisible to the garbage collector. The addrhash variable is
moved outside of the .bss section.
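For illustration only, here is a minimal Go sketch of the idea (the real allocator is C code inside mprof.goc; the privateArena name and the Unix-only mmap call are assumptions of the sketch). Memory obtained directly from the OS is never scanned or freed by the garbage collector:

package mprofsketch

import (
	"syscall"
	"unsafe"
)

// privateArena hands out memory obtained directly from the OS with mmap,
// so the Go garbage collector never scans or frees it.
type privateArena struct {
	buf []byte
	off int
}

func newPrivateArena(size int) (*privateArena, error) {
	b, err := syscall.Mmap(-1, 0, size,
		syscall.PROT_READ|syscall.PROT_WRITE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE)
	if err != nil {
		return nil, err
	}
	return &privateArena{buf: b}, nil
}

// alloc returns n bytes of GC-invisible memory, or nil when the arena is full.
func (a *privateArena) alloc(n int) unsafe.Pointer {
	if a.off+n > len(a.buf) {
		return nil
	}
	p := unsafe.Pointer(&a.buf[a.off])
	a.off += n
	return p
}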
R=golang-dev, dvyukov, rsc, minux.ma
CC=dave, golang-dev, remyoudompheng
https://golang.org/cl/7135063
This change also resolves some issues with note handling: we now make
sure that there is enough room at the bottom of every goroutine to
execute the note handler, and the `exitstatus' is no longer a global
entity, which resolves some race conditions.
R=rminnich, npe, rsc, ality
CC=golang-dev
https://golang.org/cl/6569068
Range access functions are already available in the TSan library
but were not yet used.
Time for go test -race -short:
Before:
compress/flate 24.244s
exp/norm >200s
go/printer 78.268s
After:
compress/flate 17.760s
exp/norm 5.537s
go/printer 5.738s
Fixes #4250.
R=dvyukov, golang-dev, fullung
CC=golang-dev
https://golang.org/cl/7229044
Useful for debugging runtime bugs.
+ Do not print "stack segment boundary" unless GOTRACEBACK>1.
+ Do not traceback system goroutines unless GOTRACEBACK>1.
R=rsc, minux.ma
CC=golang-dev
https://golang.org/cl/7098050
Mark candidate spans one GC pass earlier.
Move the scavenger's code out of mgc0 and into mheap (where it belongs).
R=rsc, dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/7002049
This is for SPARC64, a 64-bit processor that uses all 64-bits
of virtual addresses. The idea is to use the low order 3 bits
to at least get a small ABA counter. That should work since
pointers are aligned. The idea is for SPARC64 to set CNT_MASK
== 7, PTR_BITS == 0, PTR_MASK == 0xfffffffffffffff8.
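As a rough Go sketch of the packing this enables (the names cntMask, ptrMask, pack and unpack are illustrative, not the runtime's):

package lfstacksketch

import "unsafe"

const (
	cntMask = uint64(7)  // low 3 bits hold a small ABA counter
	ptrMask = ^uint64(7) // i.e. 0xfffffffffffffff8, the aligned pointer bits
)

// pack combines an 8-byte-aligned pointer with a 3-bit counter in one word.
func pack(p unsafe.Pointer, cnt uint64) uint64 {
	return uint64(uintptr(p))&ptrMask | cnt&cntMask
}

// unpack splits the combined word back into pointer and counter.
func unpack(v uint64) (unsafe.Pointer, uint64) {
	return unsafe.Pointer(uintptr(v & ptrMask)), v & cntMask
}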
Also add uintptr casts to avoid GCC warnings. The gccgo
runtime code is compiled with GCC, and GCC warns when casting
between a pointer and a type of a different size.
R=dvyukov
CC=golang-dev
https://golang.org/cl/7225043
Currently it is summed into the mark phase.
The change makes it easier to diagnose long stop-the-world phases.
R=golang-dev, bradfitz, dave
CC=golang-dev
https://golang.org/cl/7182043
If the scanned block has no typeinfo, the garbage collector will attempt
to determine the actual type of the block.
R=golang-dev, bradfitz, rsc
CC=golang-dev
https://golang.org/cl/7093045
Also undo revision a5b96b602690 used to workaround the bug.
Fixes #4643.
R=rsc, golang-dev, dave, minux.ma, lucio.dere, bradfitz
CC=golang-dev
https://golang.org/cl/7090043
The Plan 9 symbol table format defines big-endian symbol values
for portability, but we want to be able to generate an ELF object file
and let the host linker link it, as part of the solution to issue 4069.
The symbol table itself, since it is loaded into memory at run time,
must be filled in by the final host linker, using relocation directives
to set the symbol values. On a little-endian machine, the linker will
only fill in little-endian values during relocation, so we are forced
to use little-endian symbol values.
To preserve most of the original portability of the symbol table
format, we make the table itself say whether it uses big- or
little-endian values. If the table begins with the magic sequence
fe ff ff ff 00 00
then the actual table begins after those six bytes and contains
little-endian symbol values. Otherwise, the table is in the original
format and contains big-endian symbol values. The magic sequence
looks like an "end of table" entry (the fifth byte is zero), so legacy
readers will see a little-endian table as an empty table.
All the gc architectures are little-endian today, so the practical
effect of this CL is to make all the generated tables little-endian,
but if a big-endian system comes along, ld will not generate
the magic sequence, and the various readers will fall back to the
original big-endian interpretation.
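A hedged sketch of how a reader could pick the byte order (the names here are illustrative; the real readers live in the linkers and in debug/gosym):

package symsketch

import (
	"bytes"
	"encoding/binary"
)

// littleEndianMagic is the six-byte prefix described above: fe ff ff ff 00 00.
var littleEndianMagic = []byte{0xfe, 0xff, 0xff, 0xff, 0x00, 0x00}

// symbolByteOrder reports the byte order of the symbol values in a Plan 9
// symbol table and the offset at which the entries start. The entry layout
// itself is not decoded here.
func symbolByteOrder(table []byte) (binary.ByteOrder, int) {
	if bytes.HasPrefix(table, littleEndianMagic) {
		return binary.LittleEndian, len(littleEndianMagic)
	}
	return binary.BigEndian, 0
}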
R=ken2
CC=golang-dev
https://golang.org/cl/7066043
sysarch requires arguments to be passed on the stack, not in registers.
Credit to Shenghou Ma (minux) for the fix.
R=minux.ma, devon.odell
CC=golang-dev
https://golang.org/cl/7037043
It used to then die on a nil pointer dereference. Most standard Linux setups are rather
restrictive about the default amount of lockable memory.
R=minux.ma, rsc
CC=golang-dev
https://golang.org/cl/6997049
Currently it silently "succeeds", saying that it ran 0 tests,
if there are compilation errors.
With this change it fails and outputs the compilation error.
R=golang-dev, remyoudompheng
CC=golang-dev
https://golang.org/cl/7002058
When we release memory to the OS, if the OS doesn't want us
to release it (for example, because the program executed
mlockall(MCL_FUTURE)), madvise will fail. Ignore the failure
instead of crashing.
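A minimal Unix-only Go sketch of the same attitude, using the syscall package rather than the runtime's internal call:

package main

import "syscall"

func main() {
	b, err := syscall.Mmap(-1, 0, 1<<20,
		syscall.PROT_READ|syscall.PROT_WRITE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE)
	if err != nil {
		return
	}
	// Advising the OS that the pages are no longer needed is only a hint;
	// if the process has locked its memory (e.g. mlockall(MCL_FUTURE)),
	// madvise fails. Ignore the error instead of treating it as fatal.
	if err := syscall.Madvise(b, syscall.MADV_DONTNEED); err != nil {
		_ = err // deliberately ignored
	}
}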
Fixes #3435.
R=ken2
CC=golang-dev
https://golang.org/cl/6998052
This disables checks for limited address space
and unlimited stack. They are not required for Go.
Fixes #4577.
R=golang-dev, iant
CC=golang-dev, kamil.kisiel, minux.ma
https://golang.org/cl/7003045
Enable cgo on OpenBSD.
The OpenBSD ld.so(1) does not currently support PT_TLS sections. Work
around this by fixing up the TCB that has been provided by librthread
and reallocating a TCB with additional space for TLS. Also provide a
wrapper for pthread_create, allowing zeroed TLS to be allocated for
threads created externally to Go.
Joint work with Shenghou Ma (minux).
Requires change 6846064.
Fixes #3205.
R=golang-dev, minux.ma, iant, rsc, iant
CC=golang-dev
https://golang.org/cl/6853059
With this change the runtime can now read GOMAXPROCS, GOGC, etc.
I'm not quite sure how we missed this.
R=seed, lucio.dere, rsc
CC=golang-dev
https://golang.org/cl/6935062
The code:
func main() {
	v := make([]int64, 10)
	i := 1
	_ = v[(i*4)/3]
}
crashes the compiler with:
Program received signal SIGSEGV, Segmentation fault.
0x000000000043c274 in walkexpr (np=0x7fffffffc9b8, init=0x0) at src/cmd/gc/walk.c:587
587 *init = concat(*init, n->ninit);
(gdb) bt
#0 0x000000000043c274 in walkexpr (np=0x7fffffffc9b8, init=0x0) at src/cmd/gc/walk.c:587
#1 0x0000000000432d15 in copyexpr (n=0x7ffff7f69a48, t=<optimized out>, init=0x0) at src/cmd/gc/subr.c:2020
#2 0x000000000043f281 in walkdiv (init=0x0, np=0x7fffffffca70) at src/cmd/gc/walk.c:2901
#3 walkexpr (np=0x7ffff7f69760, init=0x0) at src/cmd/gc/walk.c:956
#4 0x000000000043d801 in walkexpr (np=0x7ffff7f69bc0, init=0x0) at src/cmd/gc/walk.c:988
#5 0x000000000043cc9b in walkexpr (np=0x7ffff7f69d38, init=0x0) at src/cmd/gc/walk.c:1068
#6 0x000000000043c50b in walkexpr (np=0x7ffff7f69f50, init=0x0) at src/cmd/gc/walk.c:879
#7 0x000000000043c50b in walkexpr (np=0x7ffff7f6a0c8, init=0x0) at src/cmd/gc/walk.c:879
#8 0x0000000000440a53 in walkexprlist (l=0x7ffff7f6a0c8, init=0x0) at src/cmd/gc/walk.c:357
#9 0x000000000043d0bf in walkexpr (np=0x7fffffffd318, init=0x0) at src/cmd/gc/walk.c:566
#10 0x00000000004402bf in vmkcall (fn=<optimized out>, t=0x0, init=0x0, va=0x7fffffffd368) at src/cmd/gc/walk.c:2275
#11 0x000000000044059a in mkcall (name=<optimized out>, t=0x0, init=0x0) at src/cmd/gc/walk.c:2287
#12 0x000000000042862b in callinstr (np=0x7fffffffd4c8, init=0x7fffffffd568, wr=0, skip=<optimized out>) at src/cmd/gc/racewalk.c:478
#13 0x00000000004288b7 in racewalknode (np=0x7ffff7f68108, init=0x7fffffffd568, wr=0, skip=0) at src/cmd/gc/racewalk.c:287
#14 0x0000000000428781 in racewalknode (np=0x7ffff7f65840, init=0x7fffffffd568, wr=0, skip=0) at src/cmd/gc/racewalk.c:302
#15 0x0000000000428abd in racewalklist (l=0x7ffff7f65840, init=0x0) at src/cmd/gc/racewalk.c:97
#16 0x0000000000428d0b in racewalk (fn=0x7ffff7f5f010) at src/cmd/gc/racewalk.c:63
#17 0x0000000000402b9c in compile (fn=0x7ffff7f5f010) at src/cmd/6g/../gc/pgen.c:67
#18 0x0000000000419f86 in funccompile (n=0x7ffff7f5f010, isclosure=0) at src/cmd/gc/dcl.c:1414
#19 0x0000000000424161 in p9main (argc=<optimized out>, argv=<optimized out>) at src/cmd/gc/lex.c:431
#20 0x0000000000401739 in main (argc=<optimized out>, argv=<optimized out>) at src/lib9/main.c:35
The problem is a nil init being passed to mkcall().
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6940045
Details:
- This CL is the conceptual skeleton of code found in CL 6114046
- The garbage collector uses struct Obj to specify memory blocks
- scanblock() puts found memory blocks into an intermediate buffer
(xbuf) before adding/flushing them to the main work buffer (wbuf)
- The main loop in scanblock() is replaced with skeleton code that
in the future will be able to recognize the type of objects and
thus will improve the garbage collector's precision.
For now, all objects are simply sequences of pointers, so
the precision of the garbage collector remains unchanged.
- The code plugs .gcdata and .gcbss sections into the garbage collector.
scanblock() in this CL is unable to make any use of this.
R=rsc, dvyukov, remyoudompheng
CC=dave, golang-dev, minux.ma
https://golang.org/cl/6856121
This includes GORACE history_size and log_path flags.
R=golang-dev, bradfitz, rsc, remyoudompheng, minux.ma
CC=golang-dev
https://golang.org/cl/6947046
When a race happens inside the runtime (chan, slice, etc.),
reports currently contain only the user file:line.
If that line contains a complex expression,
it is difficult to figure out where exactly the race is.
This change adds one more top frame with the exact
runtime function (e.g. runtime.chansend, runtime.mapaccess).
R=golang-dev
CC=golang-dev
https://golang.org/cl/6851125
Garbage collection code (to be merged later) is calling functions
which have many local variables. This increases the probability that
the stack capacity won't be big enough to hold the local variables.
So, start gc() on a bigger stack to eliminate a potentially large number
of calls to runtime·morestack().
R=rsc, remyoudompheng, dsymonds, minux.ma, iant, iant
CC=golang-dev
https://golang.org/cl/6846044
madvise was missing, so implement it in assembler. This change
needs to be extended to the other BSD variants (NetBSD and OpenBSD).
Without this change the scavenger will attempt to pass memory back
to the operating system when it has become idle, but the memory is
not returned and for long running Go processes the total memory used
can grow until OOM occurs.
I have only been able to test the code on FreeBSD AMD64. The ARM
platforms need testing.
R=golang-dev, mikioh.mikioh, dave, jgc, minux.ma
CC=golang-dev
https://golang.org/cl/6850081
Update OpenBSD runtime to use the new version of the sys___tfork
syscall and switch TLS initialisation from sys_arch to sys___set_tcb
(note that both of these syscalls are available in OpenBSD 5.2).
R=golang-dev, minux.ma
CC=golang-dev
https://golang.org/cl/6843058
This makes it possible to loop over goroutines, e.g. to print the
backtrace of goroutines 1 to 9:
set $i = 1
while $i < 10
	printf "backtrace of goroutine %d:\n", $i
	goroutine $i++ bt
end
R=lvd, lvd
CC=golang-dev
https://golang.org/cl/6843071
This significantly decreases the amount of shadow memory
mapped by the race detector.
I haven't tested on Windows, but on Linux it reduces
virtual memory size from 1351m to 330m for fmt.test.
Fixes #4379.
R=golang-dev, alex.brainman, iant
CC=golang-dev
https://golang.org/cl/6849057
Currently the race detector runtime just disables race detection in the finalizer goroutine.
This leads to false positives when a finalizer writes to shared memory: the race with the finalizer is reported in a normal goroutine that accesses the same memory.
After this change I am going to synchronize the finalizer goroutine with the rest of the world in racefingo(). This is closer to what happens in reality and so
does not have false positives.
Also add a README file with instructions on how to build the runtime.
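A small sketch of the scenario in question, a finalizer and a normal goroutine touching the same (properly locked) memory:

package main

import (
	"runtime"
	"sync"
	"time"
)

var (
	mu     sync.Mutex
	shared int
)

func main() {
	p := new(int)
	// The finalizer writes to memory that the main goroutine also reads.
	// With the finalizer goroutine properly synchronized in racefingo(),
	// the locking below is enough to keep the race detector quiet.
	runtime.SetFinalizer(p, func(*int) {
		mu.Lock()
		shared++
		mu.Unlock()
	})
	p = nil
	runtime.GC()
	time.Sleep(10 * time.Millisecond)

	mu.Lock()
	_ = shared
	mu.Unlock()
}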
R=golang-dev, minux.ma, rsc
CC=golang-dev
https://golang.org/cl/6810095
It makes it possible to catch, e.g., a data race between an atomic write and a non-atomic write,
or between Mutex.Lock() and a mutex overwrite (e.g. mu = Mutex{}).
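A program like this sketch (the atomic-vs-plain-write case) can now be flagged when run with the race detector:

package main

import (
	"sync/atomic"
	"time"
)

func main() {
	var x int64
	go func() {
		atomic.StoreInt64(&x, 1) // atomic write
	}()
	x = 2 // plain write racing with the atomic store above
	time.Sleep(10 * time.Millisecond)
	_ = x
}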
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6817103
In order to add these, we need to be able to find references
to such types that already exist in the binary. To do that, introduce
a new linker section holding a list of the types corresponding to
arrays, chans, maps, and slices.
To offset the storage cost of this list, and to simplify the code,
remove the interface{} header from the representation of a
runtime type. It was used in early versions of the code but was
made obsolete by the kind field: a switch on kind is more efficient
than a type switch.
In the godoc binary, removing the interface{} header cuts two
words from each of about 10,000 types. Adding back the list of pointers
to array, chan, map, and slice types reintroduces one word for
each of about 500 types. On a 64-bit machine, then, this CL *removes*
a net 156 kB of read-only data from the binary.
This CL does not include the needed support for precise garbage
collection. I have created issue 4375 to track that.
This CL also does not set the 'algorithm' - specifically the equality
and copy functions - for a new array correctly, so I have unexported
ArrayOf for now. That is also part of issue 4375.
Fixes #2339.
R=r, remyoudompheng, mirtchovski, iant
CC=golang-dev
https://golang.org/cl/6572043
Otherwise a poorly timed GC can collect the memory before it
is returned to the Go program.
R=golang-dev, dave, dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/6819119
Re-enable the crash tests on NetBSD now that the issue has been
identified and fixed.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/6813100
Currently the race detector runtime maps shadow memory eagerly at process startup.
This works poorly on Windows, because Windows requires a reservation in the swap file
(especially problematic if several Go programs run at the same time, each consuming GBs
of memory).
With this change the race detector maps shadow memory lazily, so the Go runtime must
notify it about all new heap memory.
This will help with the Windows port, but also eliminates the scary 16TB virtual memory
consumption in top output (which sometimes confuses monitoring scripts).
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6811085
When the first result of a type assertion is blank, the compiler would still copy out a potentially large non-interface type.
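A minimal example of the affected pattern:

package main

import "fmt"

// big is a deliberately large non-interface type.
type big [1 << 16]byte

func main() {
	var x interface{} = big{}
	// Only the boolean is wanted; with this change the compiler no longer
	// copies the large array out of the interface for the blank result.
	_, ok := x.(big)
	fmt.Println(ok)
}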
Fixes #1021.
R=golang-dev, bradfitz, rsc
CC=golang-dev
https://golang.org/cl/6812079
It speeds up the race detector somewhat, but also prevents
getcallerpc() from obtaining lessstack().
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/6812091
The deadlock occurs when another goroutine requests GC
during the test. When wait=true the test expects physical parallelism,
that is, that P goroutines are all active at the same time.
If GC is requested, then some of the goroutines are not scheduled,
so the other goroutines deadlock.
With wait=false, goroutines finish the parallel for without waiting for all
other goroutines.
Fixes #3954.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/6820098
The race detector does not understand ParFor synchronization, because it's implemented in C.
If run with -cpu=2, the race detector currently says:
WARNING: DATA RACE
Read by goroutine 5:
runtime_test.TestParForParallel()
src/pkg/runtime/parfor_test.go:118 +0x2e0
testing.tRunner()
src/pkg/testing/testing.go:301 +0x8f
Previous write by goroutine 6:
runtime_test.func·024()
src/pkg/runtime/parfor_test.go:111 +0x52
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/6811082
PauseNs is a circular buffer of recent pause times, and the
most recent one is at [((NumGC-1)+256)%256].
Also fix comments cross-linking the Go and C definition of
various structs.
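For reference, a small program (not part of the CL) that reads the most recent pause using that index:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var st runtime.MemStats
	runtime.ReadMemStats(&st)
	if st.NumGC > 0 {
		// PauseNs is a circular buffer; ((NumGC-1)+256)%256, i.e.
		// (NumGC+255)%256, indexes the most recent pause.
		fmt.Println("last GC pause (ns):", st.PauseNs[(st.NumGC+255)%256])
	}
}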
R=golang-dev, rsc, bradfitz
CC=golang-dev
https://golang.org/cl/6657047
If sources are not available, then the stack looks like:
stack_test.go:40: /tmp/gobuilder/linux-amd64-race-72b15c5d6f65/go/src/pkg/runtime/debug/bla-bla-bla/src/pkg/runtime/debug/stack_test.go:15 (0x43fb11)
stack_test.go:40: /tmp/gobuilder/linux-amd64-race-72b15c5d6f65/go/src/pkg/runtime/debug/bla-bla-bla/src/pkg/runtime/debug/stack_test.go:18 (0x43fb7a)
stack_test.go:40: /tmp/gobuilder/linux-amd64-race-72b15c5d6f65/go/src/pkg/runtime/debug/bla-bla-bla/src/pkg/runtime/debug/stack_test.go:37 (0x43fbf4)
stack_test.go:40: /tmp/gobuilder/linux-amd64-race-72b15c5d6f65/go/src/pkg/testing/bla-bla-bla/src/pkg/testing/testing.go:301 (0x43b5ba)
stack_test.go:40: /tmp/gobuilder/linux-amd64-race-72b15c5d6f65/go/src/pkg/runtime/bla-bla-bla/src/pkg/runtime/proc.c:276 (0x410670)
stack_test.go:40:
which is 6 lines.
R=golang-dev, minux.ma
CC=golang-dev
https://golang.org/cl/6637060
Check for a specific, important misalignment in the garbage collector.
Not a complete fix for issue 599 but an important workaround.
Update #599.
R=golang-dev, iant, dvyukov
CC=golang-dev
https://golang.org/cl/6641049
Also add a call to GC() to make it easier to re-enable the test.
Update #4155.
When we have precise GC merged, re-enable this test.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/6622058
The profiler collects goroutine blocking information similar to Google Perf Tools.
You may see an example of the profile (converted to svg) attached to
http://code.google.com/p/go/issues/detail?id=3946
The public API changes are:
+pkg runtime, func BlockProfile([]BlockProfileRecord) (int, bool)
+pkg runtime, func SetBlockProfileRate(int)
+pkg runtime, method (*BlockProfileRecord) Stack() []uintptr
+pkg runtime, type BlockProfileRecord struct
+pkg runtime, type BlockProfileRecord struct, Count int64
+pkg runtime, type BlockProfileRecord struct, Cycles int64
+pkg runtime, type BlockProfileRecord struct, embedded StackRecord
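A minimal usage sketch of the new API (a stand-alone program, not code from the CL):

package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	runtime.SetBlockProfileRate(1) // record every blocking event; 0 disables

	var mu sync.Mutex
	mu.Lock()
	go mu.Unlock()
	mu.Lock() // may block briefly and be recorded
	mu.Unlock()

	n, _ := runtime.BlockProfile(nil) // ask how many records exist
	rec := make([]runtime.BlockProfileRecord, n)
	n, ok := runtime.BlockProfile(rec)
	fmt.Println("records:", n, "complete:", ok)
	if ok {
		for _, r := range rec[:n] {
			fmt.Println("count:", r.Count, "cycles:", r.Cycles, "frames:", len(r.Stack()))
		}
	}
}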
R=rsc, dave, minux.ma, r
CC=gobot, golang-dev, r, remyoudompheng
https://golang.org/cl/6443115
The Go run-time assumes that all SSE floating-point exceptions
are masked so that Go programs are not broken by such invalid
operations. By default, the 64-bit version of the Plan 9 kernel
masks only some SSE floating-point exceptions. Here, we mask
them all on a per-thread basis.
R=rsc, rminnich, minux.ma
CC=golang-dev
https://golang.org/cl/6592056
The assembly offsets were converted mechanically using
code.google.com/p/rsc/cmd/asmlint. The instruction
changes were done by hand.
Fixes #2188.
R=iant, r, bradfitz, remyoudompheng
CC=golang-dev
https://golang.org/cl/6550058
This CL makes the runtime understand that the type of
the len or cap of a map, slice, or string is 'int', not 'int32',
and it is also careful to distinguish between function arguments
and results of type 'int' vs type 'int32'.
In the runtime, the new typedefs 'intgo' and 'uintgo' refer
to Go int and uint. The C types int and uint continue to be
unavailable (cause intentional compile errors).
This CL does not change the meaning of int, but it should make
the eventual change of the meaning of int on amd64 a bit
smoother.
Update #2188.
R=iant, r, dave, remyoudompheng
CC=golang-dev
https://golang.org/cl/6551067
Using offsets from Tos is cumbersome and we've had problems
in the past. Since it's only being used to grab the PID, we'll just
get that from the default TLS instead.
R=rsc, rminnich, npe
CC=golang-dev
https://golang.org/cl/6543049
The change is a preparation for the new scheduler.
It introduces the runtime.park() function,
which atomically unlocks the mutex and parks the goroutine.
It will allow removing the racy readyonstop flag,
which is difficult to implement without the global scheduler mutex.
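The shape of the pattern is the one sync.Cond.Wait already gives Go code; an analogy only, not the runtime's C implementation:

package main

import "sync"

// cond.Wait atomically unlocks the mutex and suspends the goroutine,
// closing the window in which a wakeup could be lost between unlock
// and sleep -- the same guarantee runtime.park() provides internally.
func main() {
	mu := &sync.Mutex{}
	cond := sync.NewCond(mu)
	ready := false

	go func() {
		mu.Lock()
		ready = true
		mu.Unlock()
		cond.Signal()
	}()

	mu.Lock()
	for !ready {
		cond.Wait() // unlock mu and park, atomically
	}
	mu.Unlock()
}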
R=rsc, remyoudompheng, dave
CC=golang-dev
https://golang.org/cl/6501077
Fixes #3456.
This proposal is a reformulation of CL 5987063. This CL resets
the default GOARM value to 6 and allows the use of the VFPv3
optimisation if GOARM=7. Binaries built with this CL in place
will abort if GOARM=7 was used and the target host does not
support VFPv3.
R=minux.ma, rsc, ajstarks
CC=golang-dev
https://golang.org/cl/6501099
Fixes #3911.
Requires CL 6449127.
dfc@qnap:~$ ./runtime.test
runtime: this CPU has no floating point hardware, so it cannot run
this GOARM=7 binary. Recompile using GOARM=5.
R=rsc, minux.ma
CC=golang-dev
https://golang.org/cl/6442109
Signal handlers are global resources but many language
environments (Go, C++ at Google, etc) assume they have sole
ownership of a particular handler. Signal handlers in
mixed-language applications must therefore be robust against
unexpected delivery of certain signals, such as SIGPROF.
The default Go signal handler runtime·sigtramp assumes that it
will never be called on a non-Go thread, but this assumption
is violated when linking in C++ code that spawns threads.
Specifically, the handler asserts the thread has an associated
"m" (Go scheduler).
This CL is a very simple workaround: discard SIGPROF delivered to non-Go threads. runtime.badsignal(int32) now receives the signal number; if it returns without panicking (e.g. sig==SIGPROF) the signal is discarded.
I don't think there is any really satisfactory solution to the
problem of signal-based profiling in a mixed-language
application. It's not only the issue of handler clobbering,
but also that a C++ SIGPROF handler called in a Go thread
can't unwind the Go stack (and vice versa). The best we can
hope for is not crashing.
Note:
- I've ported this to all POSIX platforms, except ARM-linux which already ignores unexpected signals on m-less threads.
- I've avoided tail-calling runtime.badsignal because AFAICT the 6a/6l don't support it.
- I've avoided hoisting 'push sig' (common to both function calls) because it makes the code harder to read.
- Fixed an (apparently incorrect?) docstring.
R=iant, rsc, minux.ma
CC=golang-dev
https://golang.org/cl/6498057
Reverts part of CL 6460082.
If a doc comment describes a type by explaining the
meaning of one instance of the type, a leading article
is fine and makes the text less awkward.
Compare:
// A dog is a kind of animal.
// Dog is a kind of animal.
R=golang-dev, dsymonds, dvyukov, r
CC=golang-dev
https://golang.org/cl/6494066
This set of changes extends the Plan 9 support
to include the AMD64 architecture and should
work on all versions of Plan 9.
R=golang-dev, rminnich, noah.evans, rsc, minux.ma, npe
CC=akskuma, golang-dev, jfflore, noah.evans
https://golang.org/cl/6479052
Use version 2 of the NetBSD signal ABI - both version 2 and version 3
are supported by the kernel, with near identical behaviour. However,
the netbsd32 compat code does not allow version 3 to be used, which
prevents Go netbsd/386 binaries from running in compat mode on a
NetBSD amd64 kernel. Switch to version 2 of the ABI, which is the
same version currently used by NetBSD's libc.
R=minux.ma
CC=golang-dev
https://golang.org/cl/6476068
When manipulating the stack pointer use the UESP register instead
of the ESP register, since the UESP register is the one that gets
restored from the machine context. Fixes broken tests on netbsd/386.
R=golang-dev, minux.ma, r, bsiegert
CC=golang-dev
https://golang.org/cl/6465054
Disable the crash handler test on NetBSD until I can figure out why
it triggers failures in later tests.
R=golang-dev, minux.ma
CC=golang-dev
https://golang.org/cl/6460090
Surrogate halves are part of UTF-16 and should never appear in UTF-8.
(The rune that two combined halves represent in UTF-16 should
be encoded directly.)
Encoding: encode as RuneError.
Decoding: convert to RuneError, consume one byte.
This requires changing:
package unicode/utf8
runtime for range over string
Also added utf8.ValidRune and fixed a bug in utf8.RuneLen.
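In terms of the unicode/utf8 API, the behaviour looks like this small illustration:

package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	half := rune(0xD800) // a lone UTF-16 surrogate half

	// Surrogate halves are not valid runes on their own.
	fmt.Println(utf8.ValidRune(half)) // false

	// Encoding one produces the replacement character RuneError.
	buf := make([]byte, utf8.UTFMax)
	n := utf8.EncodeRune(buf, half)
	r, _ := utf8.DecodeRune(buf[:n])
	fmt.Println(r == utf8.RuneError) // true
}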
Fixes #3927.
R=golang-dev, rsc, bsiegert
CC=golang-dev
https://golang.org/cl/6458099
Depends on CL 6197045.
Result obtained on Core i7 620M, Darwin/amd64:
benchmark old ns/op new ns/op delta
BenchmarkComplex128DivNormal 57 28 -50.78%
BenchmarkComplex128DivNisNaN 49 15 -68.90%
BenchmarkComplex128DivDisNaN 49 15 -67.88%
BenchmarkComplex128DivNisInf 40 12 -68.50%
BenchmarkComplex128DivDisInf 33 13 -61.06%
Result obtained on Core i7 620M, Darwin/386:
benchmark old ns/op new ns/op delta
BenchmarkComplex128DivNormal 89 50 -44.05%
BenchmarkComplex128DivNisNaN 307 802 +161.24%
BenchmarkComplex128DivDisNaN 309 788 +155.02%
BenchmarkComplex128DivNisInf 278 237 -14.75%
BenchmarkComplex128DivDisInf 46 22 -52.46%
Result obtained on 700MHz OMAP4460, Linux/ARM:
benchmark old ns/op new ns/op delta
BenchmarkComplex128DivNormal 1557 465 -70.13%
BenchmarkComplex128DivNisNaN 1443 220 -84.75%
BenchmarkComplex128DivDisNaN 1481 218 -85.28%
BenchmarkComplex128DivNisInf 952 216 -77.31%
BenchmarkComplex128DivDisInf 861 231 -73.17%
The 386 version has a performance regression, but as we have
decided to use SSE2 instead of x87 FPU for 386 too (issue 3912),
I won't address this issue.
R=dsymonds, mchaten, iant, dave, mtj, rsc, r
CC=golang-dev
https://golang.org/cl/6024045
Our old choice is not working properly, at least on VFPv2 in
ARM1136JF-S (it is not preserved across float64->float32 conversions).
Fixes #3745.
R=dave, rsc
CC=golang-dev
https://golang.org/cl/6344078
Since NUL usually terminates strings in the underlying syscalls, allowing
it when converting string arguments is a security risk, especially
when dealing with filenames. For example, a program might reason that a
filename like "/root/..\x00/" is a subdirectory of "/root/" and allow
access to it, while the underlying syscall will treat "\x00" as the end of
the string, making the actual filename "/root/..", which might
be unexpected. Returning EINVAL when string arguments have a NUL in
them makes sure this attack vector is unusable.
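Today's syscall API reflects this check; a minimal illustration (not code from the CL):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// An embedded NUL in a string argument is rejected with EINVAL
	// instead of being silently treated as the end of the filename.
	_, err := syscall.BytePtrFromString("/root/..\x00/")
	fmt.Println(err == syscall.EINVAL) // true
}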
R=golang-dev, r, bradfitz, fullung, rsc, minux.ma
CC=golang-dev
https://golang.org/cl/6458050