ReadMIMEHeader is used by net/http, net/mail, and
mime/multipart.
Don't do so many small allocations. Calculate up front
how much we'll probably need.
benchmark                  old ns/op    new ns/op    delta
BenchmarkReadMIMEHeader         8433         7467  -11.45%

benchmark                 old allocs   new allocs    delta
BenchmarkReadMIMEHeader           23           14  -39.13%

benchmark                  old bytes    new bytes    delta
BenchmarkReadMIMEHeader         1705         1343  -21.23%
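A rough sketch of the idea (a hypothetical helper, not the actual
net/textproto code): size the header map from an estimate of the number of
header lines, and carve single-value entries out of one shared backing array
instead of allocating a small slice per key.

	package main

	import (
		"fmt"
		"strings"
	)

	func readHeader(lines []string) map[string][]string {
		hint := len(lines) // assumed estimate of how many headers are coming
		m := make(map[string][]string, hint)
		backing := make([]string, hint) // one allocation shared across keys
		for _, l := range lines {
			k, v, ok := strings.Cut(l, ": ")
			if !ok {
				continue
			}
			if m[k] == nil && len(backing) > 0 {
				s := backing[:1:1] // one-element slice from the shared array
				backing = backing[1:]
				s[0] = v
				m[k] = s
				continue
			}
			m[k] = append(m[k], v) // repeated key: fall back to a normal append
		}
		return m
	}

	func main() {
		h := readHeader([]string{"Host: example.com", "Accept: */*", "Accept: text/html"})
		fmt.Println(h)
	}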
R=golang-dev, r, iant, adg
CC=golang-dev
https://golang.org/cl/8179043
origin/master is always a remote branch, and it doesn't make sense to
switch to a remote branch. master is the default branch that tracks it.
R=adg
CC=golang-dev, matt.jibson
https://golang.org/cl/10869046
This does not include AES-GCM yet. Also, it assumes that the handshake and
certificate signature hash are always SHA-256, which is true of the ciphersuites
that we currently support.
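For reference, a small usage sketch (plain crypto/tls configuration, not part
of this change) that pins a connection's version to TLS 1.2:

	package main

	import (
		"crypto/tls"
		"fmt"
	)

	func main() {
		cfg := &tls.Config{
			MinVersion: tls.VersionTLS12,
			MaxVersion: tls.VersionTLS12,
		}
		fmt.Printf("%#x\n", cfg.MaxVersion) // 0x303
	}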
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10762044
Using m->tls[0] to save the ucontext pointer is not re-entry safe, and
the old code didn't set it before the early return when a signal is
received on a non-Go thread.
As a result, misc/cgo/test used to hang when testing issue 5337.
R=golang-dev, bradfitz, rsc
CC=golang-dev
https://golang.org/cl/10076045
Escape analysis needs the right curfn value on a dclfunc node; otherwise it will not analyze the function.
When generating method value wrappers, we forgot to set the curfn correctly.
Fixes #5753.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10383048
A struct with a single field was considered equivalent to the
field type, which is incorrect if the field is blank.
Fields with padding could make the compiler think some
types are comparable when they are not.
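A minimal illustration of the blank-field distinction (language semantics
only, not the compiler change itself):

	package main

	import "fmt"

	// T has a single blank field, so it is not interchangeable with float64:
	// struct equality skips blank fields, while float64 equality does not
	// (for example, NaN != NaN).
	type T struct {
		_ float64
	}

	func main() {
		var a, b T
		fmt.Println(a == b) // true: the blank field is ignored by ==
	}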
Fixes #5698.
R=rsc, golang-dev, daniel.morsing, bradfitz, gri, r
CC=golang-dev
https://golang.org/cl/10271046
When deleting a timer, a panic due to a nil dereference
would leave a lock held, possibly leading to a deadlock
in a defer. Instead, return false on a nil timer.
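A minimal sketch of the guard, with hypothetical stand-ins for the runtime's
timer and lock:

	package main

	import (
		"fmt"
		"sync"
	)

	type timer struct{ when int64 }

	var timersMu sync.Mutex

	func deltimer(t *timer) bool {
		if t == nil {
			return false // reject nil before locking instead of panicking later
		}
		timersMu.Lock()
		defer timersMu.Unlock()
		// ... remove t from the timer heap ...
		_ = t.when
		return true
	}

	func main() {
		fmt.Println(deltimer(nil)) // false, with no panic and no lock left held
	}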
Fixes #5745.
R=golang-dev, daniel.morsing, dvyukov, rsc, iant
CC=golang-dev
https://golang.org/cl/10373047
This CL provides stable in-place sorting by use of a bottom-up
merge sort with in-place merging done by the SymMerge algorithm
from P.-S. Kim and A. Kutzner.
The additional space needed for stable sorting (in the form of
stack space) is logarithmic in the input size n.
The numbers of calls to Less and Swap grow like O(n * log n) and
O(n * log n * log n), respectively:
Stable sorting random data uses significantly more calls
to Swap than the unstable quicksort implementation (5 times more
on n=100, 10 times more on n=1e4 and 23 times more on n=1e8).
The number of calls to Less is practically the same for Sort and
Stable.
Stable sorting 1 million random integers takes 5 times longer
than using Sort.
BenchmarkSortString1K     50000      328662 ns/op
BenchmarkStableString1K   50000      380231 ns/op  1.15 slower
BenchmarkSortInt1K        50000      157336 ns/op
BenchmarkStableInt1K      50000      191167 ns/op  1.22 slower
BenchmarkSortInt64K        1000    14466297 ns/op
BenchmarkStableInt64K       500    16190266 ns/op  1.12 slower
BenchmarkSort1e2         200000       64923 ns/op
BenchmarkStable1e2        50000      167128 ns/op  2.57 slower
BenchmarkSort1e4           1000    14540613 ns/op
BenchmarkStable1e4          100    58117289 ns/op  4.00 slower
BenchmarkSort1e6              5  2429631508 ns/op
BenchmarkStable1e6            1 12077036952 ns/op  4.97 slower
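A short usage sketch of the new stable sort (illustrative type and data, not
taken from this CL): equal elements keep their original order under Stable.

	package main

	import (
		"fmt"
		"sort"
	)

	// byLen orders strings by length only, so strings of equal length are
	// "equal" to the sort and Stable must preserve their input order.
	type byLen []string

	func (s byLen) Len() int           { return len(s) }
	func (s byLen) Less(i, j int) bool { return len(s[i]) < len(s[j]) }
	func (s byLen) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }

	func main() {
		words := byLen{"pear", "fig", "kiwi", "yam", "plum"}
		sort.Stable(words)
		fmt.Println(words) // [fig yam pear kiwi plum]
	}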
R=golang-dev, bradfitz, iant, 0xjnml, rsc
CC=golang-dev
https://golang.org/cl/9612044
Design doc at golang.org/s/go12slice.
This is an experimental feature and may not be included in the release.
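Assuming the design doc describes the three-index slice form s[low:high:max],
here is a short illustration of the syntax:

	package main

	import "fmt"

	func main() {
		buf := make([]byte, 10)
		s := buf[2:5:7] // len 3, cap 5: capacity is limited to index 7 of buf
		fmt.Println(len(s), cap(s)) // 3 5
	}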
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/10743046
There are various problems, and both Dmitriy and I
will be away for the next week. Make the runtime a bit
more stable while we're gone.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/10848043
fn can clearly hold a closure in memory.
argp/pc point into the stack and so can hold
in memory a block that was previously
a large stack segment.
R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/10784043
Depending on net/http means depending on cgo.
When the tree is in a shaky state it's nice to see sync/atomic
pass even if cgo or net causes broken binaries.
R=golang-dev, dave, r
CC=golang-dev
https://golang.org/cl/10753044
This change adds a basic compiler plugin for Go. The plugin
integrates "go build" with Vim's ":make" command and the
quickfix list.
Fixes #5751.
R=golang-dev, dsymonds, niklas.schnelle, 0xjnml
CC=golang-dev
https://golang.org/cl/10466043
After loading a frame of a GIF, check that each pixel value
is within the bounds of the frame's palette.
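A minimal sketch of that kind of validation (a hypothetical helper, not the
decoder's actual code):

	package main

	import (
		"errors"
		"fmt"
		"image/color"
	)

	func checkPixels(pix []byte, palette color.Palette) error {
		for _, p := range pix {
			if int(p) >= len(palette) {
				return errors.New("gif: pixel value outside the frame's palette")
			}
		}
		return nil
	}

	func main() {
		pal := color.Palette{color.Black, color.White}
		fmt.Println(checkPixels([]byte{0, 1, 3}, pal)) // pixel value 3 is out of range
	}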
Fixes #5401.
R=nigeltao, r
CC=golang-dev
https://golang.org/cl/10597043
A casualty of https://golang.org/cl/10195044.
If x is a 32-bit int and u is a 64-bit ulong,
	u = (uint)x  // converts to uint before extension, so zero fills
	u = (ulong)x // sign-extends
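A rough Go analogue of the same pitfall (the code in question is C; this is
only an illustration of the conversion rules):

	package main

	import "fmt"

	func main() {
		x := int32(-1)
		fmt.Printf("%#x\n", uint64(uint32(x))) // 0xffffffff: zero-filled
		fmt.Printf("%#x\n", uint64(x))         // 0xffffffffffffffff: sign-extended
	}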
TBR=iant, r
CC=golang-dev
https://golang.org/cl/10814043
Exported inlined functions that perform a string conversion
using a non-exported named type could be missing that type in the export data.
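A hypothetical example of the shape of code involved (names invented; the
point is only the conversion through an unexported named type inside a small,
inlinable exported function):

	package main

	import "fmt"

	type hidden string // non-exported named type

	// Wrap is small enough to be inlined; the conversion through hidden is the
	// kind of construct whose type must still be recorded for the inliner.
	func Wrap(b []byte) string { return string(hidden(b)) }

	func main() {
		fmt.Println(Wrap([]byte("ok")))
	}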
Fixes #5755.
R=rsc, golang-dev, ality, r
CC=golang-dev
https://golang.org/cl/10464043
On amd64 the frames are very close to the limit for a
nosplit (textflag 7) function, in part because the C compiler
does not make any attempt to reclaim space allocated for
completely registerized variables. Avoid a few short-lived
variables to reclaim two words.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/10758043
Keeping the string "compactframe" because that's what
I always search for to find this code. But point to the real place too.
TBR=iant
CC=golang-dev
https://golang.org/cl/10676047
Currently it replaces the GOGCTRACE env var (GODEBUG=gctrace=1).
The plan is to extend it with other types of debug tracing,
e.g. GODEBUG=gctrace=1,schedtrace=100.
R=rsc
CC=bradfitz, daniel.morsing, gobot, golang-dev
https://golang.org/cl/10026045
The last patch for the preemptive scheduler:
with this change, stoptheworld issues preemption
requests every 100us.
Update #543.
R=golang-dev, daniel.morsing, rsc
CC=golang-dev
https://golang.org/cl/10264044
The new -coverpkg flag allows computing coverage in
one set of packages while running the tests of a different set.
Also clean up some of the previous CL's recompileForTest,
using packageList to avoid the clumsy recursion.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/10705043
On x86 it is a few words lower on the stack than m->morebuf.sp
so it is a more precise check. Enabling the check requires recording
a valid gp->sched in reflect.call too. This is a good thing in general,
since it will make stack traces during reflect.call work better, and it
may be useful for preemption too.
R=dvyukov
CC=golang-dev
https://golang.org/cl/10709043
runtime.entersyscall() sets g->status = Gsyscall and then calls
runtime.lock(), which causes a stack split.
runtime.newstack() resets g->status to Grunning.
This can lead to a crash during GC (the world is not stopped), or the GC will scan the stack incorrectly.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10696043
Also use 2048-bit RSA keys as the default in generate_cert.go,
as recommended by NIST.
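A minimal sketch of generating a key of that size with the standard library:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"fmt"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		fmt.Println("modulus bits:", key.N.BitLen())
	}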
R=golang-dev, rsc, bradfitz
CC=golang-dev
https://golang.org/cl/10676043
On my 64-bit machine, despite being 32-bit code, fixed-base
multiplications are 7.1x faster and arbitrary multiplications are 2.6x
faster.
It is difficult to review this change. However, the code is essentially
the same as code that has been open-sourced in Chromium. There it has
been successfully performing P-256 operations for several months on
many machines, so the arithmetic of the code should be sound.
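For context, a short usage sketch of the curve through the standard
crypto/elliptic interface (not the new assembly itself):

	package main

	import (
		"crypto/elliptic"
		"crypto/rand"
		"fmt"
	)

	func main() {
		curve := elliptic.P256()
		priv, x, y, err := elliptic.GenerateKey(curve, rand.Reader)
		if err != nil {
			panic(err)
		}
		fmt.Println(len(priv)*8, "bit scalar; on curve:", curve.IsOnCurve(x, y))
	}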
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10551044
Failure on bot:
http://build.golang.org/log/f4c648906e1289ec2237c1d0880fb1a8b1852a08
««« original CL description
runtime: fix CPU underutilization
runtime.newproc/ready are deliberately sloppy about waking new M's,
they only ensure that there is at least 1 spinning M.
Currently to compensate for that, schedule() checks if the current P
has local work and there are no spinning M's, it wakes up another one.
It does not work if goroutines do not call schedule.
With this change a spinning M wakes up another M when it finds work to do.
It's also not ideal, but it fixes the underutilization.
A proper check would require to know the exact number of runnable G's,
but it's too expensive to maintain.
Fixes#5586.
R=rsc
TBR=rsc
CC=gobot, golang-dev
https://golang.org/cl/9776044
»»»
R=golang-dev
CC=golang-dev
https://golang.org/cl/10692043
runtime.newproc/ready are deliberately sloppy about waking new M's:
they only ensure that there is at least 1 spinning M.
Currently, to compensate for that, schedule() checks whether the current P
has local work and there are no spinning M's, and if so it wakes up another one.
This does not work if goroutines do not call schedule.
With this change a spinning M wakes up another M when it finds work to do.
It's still not ideal, but it fixes the underutilization.
A proper check would require knowing the exact number of runnable G's,
but that is too expensive to maintain.
Fixes #5586.
R=rsc
CC=gobot, golang-dev
https://golang.org/cl/9776044
Current code can print more arguments than necessary
and also incorrectly prints "...".
Update #5723.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10689043
Until now, the goroutine state has been scattered during the
execution of newstack and oldstack. It's all there, and those routines
know how to get back to a working goroutine, but other pieces of
the system, like stack traces, do not. If something does interrupt
the newstack or oldstack execution, the rest of the system can't
understand the goroutine. For example, if newstack decides there
is an overflow and calls throw, the stack tracer wouldn't dump the
goroutine correctly.
For newstack to save a useful state snapshot, it needs to be able
to rewind the PC in the function that triggered the split back to
the beginning of the function. (The PC is a few instructions in, just
after the call to morestack.) To make that possible, we change the
prologues to insert a jmp back to the beginning of the function
after the call to morestack. That is, the prologue used to be roughly:
TEXT myfunc
	check for split
	jmpcond nosplit
	call morestack
nosplit:
	sub $xxx, sp
Now an extra instruction is inserted after the call:
TEXT myfunc
start:
	check for split
	jmpcond nosplit
	call morestack
	jmp start
nosplit:
	sub $xxx, sp
The jmp is not executed directly. It is decoded and simulated by
runtime.rewindmorestack to discover the beginning of the function,
and then the call to morestack returns directly to the start label
instead of to the jump instruction. So logically the jmp is still
executed, just not by the cpu.
The prologue thus repeats in the case of a function that needs a
stack split, but against the cost of the split itself, the extra few
instructions are noise. The repeated prologue has the nice effect of
making a stack split double-check that the new stack is big enough:
if morestack happens to return on a too-small stack, we'll now notice
before corruption happens.
The ability for newstack to rewind to the beginning of the function
should help preemption too. If newstack decides that it was called
for preemption instead of a stack split, it now has the goroutine state
correctly paused if rescheduling is needed, and when the goroutine
can run again, it can return to the start label on its original stack
and re-execute the split check.
Here is an example of a split stack overflow showing the full
trace, without any special cases in the stack printer.
(This one was triggered by making the split check incorrect.)
runtime: newstack framesize=0x0 argsize=0x18 sp=0x6aebd0 stack=[0x6b0000, 0x6b0fa0]
morebuf={pc:0x69f5b sp:0x6aebd8 lr:0x0}
sched={pc:0x68880 sp:0x6aebd0 lr:0x0 ctxt:0x34e700}
runtime: split stack overflow: 0x6aebd0 < 0x6b0000
fatal error: runtime: split stack overflow
goroutine 1 [stack split]:
runtime.mallocgc(0x290, 0x100000000, 0x1)
/Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:21 fp=0x6aebd8
runtime.new()
/Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:682 +0x5b fp=0x6aec08
go/build.(*Context).Import(0x5ae340, 0xc210030c71, 0xa, 0xc2100b4380, 0x1b, ...)
/Users/rsc/g/go/src/pkg/go/build/build.go:424 +0x3a fp=0x6b00a0
main.loadImport(0xc210030c71, 0xa, 0xc2100b4380, 0x1b, 0xc2100b42c0, ...)
/Users/rsc/g/go/src/cmd/go/pkg.go:249 +0x371 fp=0x6b01a8
main.(*Package).load(0xc21017c800, 0xc2100b42c0, 0xc2101828c0, 0x0, 0x0, ...)
/Users/rsc/g/go/src/cmd/go/pkg.go:431 +0x2801 fp=0x6b0c98
main.loadPackage(0x369040, 0x7, 0xc2100b42c0, 0x0)
/Users/rsc/g/go/src/cmd/go/pkg.go:709 +0x857 fp=0x6b0f80
----- stack segment boundary -----
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc2100e6c00, 0xc2100e5750, ...)
/Users/rsc/g/go/src/cmd/go/build.go:539 +0x437 fp=0x6b14a0
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc21015b400, 0x2, ...)
/Users/rsc/g/go/src/cmd/go/build.go:528 +0x1d2 fp=0x6b1658
main.(*builder).test(0xc2100902a0, 0xc210092000, 0x0, 0x0, 0xc21008ff60, ...)
/Users/rsc/g/go/src/cmd/go/test.go:622 +0x1b53 fp=0x6b1f68
----- stack segment boundary -----
main.runTest(0x5a6b20, 0xc21000a020, 0x2, 0x2)
/Users/rsc/g/go/src/cmd/go/test.go:366 +0xd09 fp=0x6a5cf0
main.main()
/Users/rsc/g/go/src/cmd/go/main.go:161 +0x4f9 fp=0x6a5f78
runtime.main()
/Users/rsc/g/go/src/pkg/runtime/proc.c:183 +0x92 fp=0x6a5fa0
runtime.goexit()
/Users/rsc/g/go/src/pkg/runtime/proc.c:1266 fp=0x6a5fa8
And here is a seg fault during oldstack:
SIGSEGV: segmentation violation
PC=0x1b2a6
runtime.oldstack()
/Users/rsc/g/go/src/pkg/runtime/stack.c:159 +0x76
runtime.lessstack()
/Users/rsc/g/go/src/pkg/runtime/asm_amd64.s:270 +0x22
goroutine 1 [stack unsplit]:
fmt.(*pp).printArg(0x2102e64e0, 0xe5c80, 0x2102c9220, 0x73, 0x0, ...)
/Users/rsc/g/go/src/pkg/fmt/print.go:818 +0x3d3 fp=0x221031e6f8
fmt.(*pp).doPrintf(0x2102e64e0, 0x12fb20, 0x2, 0x221031eb98, 0x1, ...)
/Users/rsc/g/go/src/pkg/fmt/print.go:1183 +0x15cb fp=0x221031eaf0
fmt.Sprintf(0x12fb20, 0x2, 0x221031eb98, 0x1, 0x1, ...)
/Users/rsc/g/go/src/pkg/fmt/print.go:234 +0x67 fp=0x221031eb40
flag.(*stringValue).String(0x2102c9210, 0x1, 0x0)
/Users/rsc/g/go/src/pkg/flag/flag.go:180 +0xb3 fp=0x221031ebb0
flag.(*FlagSet).Var(0x2102f6000, 0x293d38, 0x2102c9210, 0x143490, 0xa, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:633 +0x40 fp=0x221031eca0
flag.(*FlagSet).StringVar(0x2102f6000, 0x2102c9210, 0x143490, 0xa, 0x12fa60, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:550 +0x91 fp=0x221031ece8
flag.(*FlagSet).String(0x2102f6000, 0x143490, 0xa, 0x12fa60, 0x0, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:563 +0x87 fp=0x221031ed38
flag.String(0x143490, 0xa, 0x12fa60, 0x0, 0x161950, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:570 +0x6b fp=0x221031ed80
testing.init()
/Users/rsc/g/go/src/pkg/testing/testing.go:-531 +0xbb fp=0x221031edc0
strings_test.init()
/Users/rsc/g/go/src/pkg/strings/strings_test.go:1115 +0x62 fp=0x221031ef70
main.init()
strings/_test/_testmain.go:90 +0x3d fp=0x221031ef78
runtime.main()
/Users/rsc/g/go/src/pkg/runtime/proc.c:180 +0x8a fp=0x221031efa0
runtime.goexit()
/Users/rsc/g/go/src/pkg/runtime/proc.c:1269 fp=0x221031efa8
goroutine 2 [runnable]:
runtime.MHeap_Scavenger()
/Users/rsc/g/go/src/pkg/runtime/mheap.c:438
runtime.goexit()
/Users/rsc/g/go/src/pkg/runtime/proc.c:1269
created by runtime.main
/Users/rsc/g/go/src/pkg/runtime/proc.c:166
rax 0x23ccc0
rbx 0x23ccc0
rcx 0x0
rdx 0x38
rdi 0x2102c0170
rsi 0x221032cfe0
rbp 0x221032cfa0
rsp 0x7fff5fbff5b0
r8 0x2102c0120
r9 0x221032cfa0
r10 0x221032c000
r11 0x104ce8
r12 0xe5c80
r13 0x1be82baac718
r14 0x13091135f7d69200
r15 0x0
rip 0x1b2a6
rflags 0x10246
cs 0x2b
fs 0x0
gs 0x0
Fixes #5723.
R=r, dvyukov, go.peter.90, dave, iant
CC=golang-dev
https://golang.org/cl/10360048