Cleanup before converting to Go.
Fortunately, nobody is using it, because it is incorrect:
it returns monotonic runtime time instead of the claimed real time.
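For context, the two clocks in user-level terms; a minimal sketch using today's time package (the monotonic reading in time.Time postdates this CL):

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now() // modern time.Now also records a monotonic reading
	time.Sleep(50 * time.Millisecond)

	// time.Since uses the monotonic reading: elapsed runtime,
	// immune to the system clock being stepped.
	fmt.Println("monotonic elapsed:", time.Since(start))

	// Round(0) strips the monotonic reading, so this subtraction
	// compares wall-clock ("real time") instants, which can jump
	// if the clock is adjusted.
	fmt.Println("wall-clock delta: ", time.Now().Round(0).Sub(start.Round(0)))
}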
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rsc
https://golang.org/cl/129480043
These are required for chans, semaphores, timers, etc.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rlh, rsc
https://golang.org/cl/123640043
UndefinedBehaviorSanitizer reports this as undefined behavior in C:
src/cmd/gc/racewalk.c:422:37: runtime error: member access within null pointer of type 'Node' (aka 'struct Node')
src/cmd/gc/racewalk.c:423:37: runtime error: member access within null pointer of type 'Node' (aka 'struct Node')
LGTM=rsc
R=dave, rsc
CC=golang-codereviews
https://golang.org/cl/125570043
Initialize GC later, as it needs to read the GOGC env var.
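For reference, GOGC is an integer percentage (or "off"); a sketch of equivalent user-level parsing via the public runtime/debug API (not the runtime's own init code):

package main

import (
	"fmt"
	"os"
	"runtime/debug"
	"strconv"
)

func main() {
	gogc := 100 // default: collect when the heap doubles
	switch v := os.Getenv("GOGC"); {
	case v == "off":
		gogc = -1 // disables the collector
	case v != "":
		if n, err := strconv.Atoi(v); err == nil {
			gogc = n
		}
	}
	// The runtime does this parsing itself during init, which is why
	// GC setup must run after the environment is available.
	prev := debug.SetGCPercent(gogc)
	fmt.Printf("GC percent: was %d, now %d\n", prev, gogc)
}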
Fixes #8562.
LGTM=daniel.morsing, rsc
R=golang-codereviews, daniel.morsing, rsc
CC=golang-codereviews, khr, rlh
https://golang.org/cl/130990043
Calling ReadMemStats, which does stoptheworld, on m0 while holding locks
was not a good idea.
Stoptheworld while holding locks is a recipe for deadlocks (added a check for this).
Stoptheworld on g0 may or may not work (added a check for this as well).
As far as I understand, the scavenger will print incorrect numbers now,
as stack usage is not subtracted from the heap. But that's better than deadlocking.
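For contrast, the safe calling convention from ordinary Go code (a minimal sketch):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// ReadMemStats stops the world to take a consistent snapshot,
	// so it must be called where preemption is possible: a regular
	// goroutine, holding no locks the runtime itself may need.
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("heap inuse: %d, idle: %d, sys: %d\n", m.HeapInuse, m.HeapIdle, m.Sys)
}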
LGTM=khr
R=golang-codereviews, rsc, khr
CC=golang-codereviews, rlh
https://golang.org/cl/124670043
zsyscall_windows_386.go and zsyscall_windows_amd64.go contain the same bytes.
LGTM=r
R=golang-codereviews, r
CC=golang-codereviews
https://golang.org/cl/124640043
The existing lock needed to be held longer. If a timeout occurred
while writing (but after the guarded timeout check), the writes
would clobber a future connection's buffer.
Also remove a harmless warning by making Write also set the
flag that headers were sent (implicitly), so we don't try to
write headers later (a no-op + warning) on timeout after we've
started writing.
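A hypothetical sketch of the locking pattern (invented names, not the actual net/http code):

package main

import (
	"bytes"
	"errors"
	"sync"
)

// timeoutWriter sketches the pattern the fix restores: one mutex held
// across both the timed-out check and the write itself, so a timeout
// firing mid-write cannot clobber the buffer.
type timeoutWriter struct {
	mu          sync.Mutex
	timedOut    bool
	wroteHeader bool
	buf         bytes.Buffer
}

func (w *timeoutWriter) Write(p []byte) (int, error) {
	w.mu.Lock()
	defer w.mu.Unlock() // the lock spans check AND write, not just the check
	if w.timedOut {
		return 0, errors.New("handler timed out")
	}
	w.wroteHeader = true // writing implies headers were (implicitly) sent
	return w.buf.Write(p)
}

// timeout marks the writer dead; later Writes fail instead of racing.
func (w *timeoutWriter) timeout() {
	w.mu.Lock()
	w.timedOut = true
	w.mu.Unlock()
}

func main() {
	w := &timeoutWriter{}
	w.Write([]byte("hello"))
	w.timeout()
	if _, err := w.Write([]byte("late")); err != nil {
		println("late write rejected")
	}
}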
Fixes #8414
Fixes #8209
LGTM=ruiu, adg
R=adg, ruiu
CC=golang-codereviews
https://golang.org/cl/123610043
Half the code in the garbage collector accesses the bitmap
as an array of bytes instead of as an array of uintptrs.
This is tricky to do correctly in a portable fashion;
as written, it breaks on big-endian systems.
Make the bitmap a byte array.
This simplifies markallocated, scanblock and span sweep along the way,
as we no longer need to recalculate the bitmap position for each word.
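To see the portability trap, a minimal sketch viewing one bitmap word as bytes under each byte order:

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	word := uint64(1) // one bitmap word with only the lowest bit set

	// Which byte holds that bit depends on byte order, so code that
	// indexes a word-based bitmap by byte is endian-dependent. Storing
	// the bitmap as a byte array avoids the problem entirely.
	le := make([]byte, 8)
	be := make([]byte, 8)
	binary.LittleEndian.PutUint64(le, word)
	binary.BigEndian.PutUint64(be, word)
	fmt.Println("little-endian bytes:", le) // the bit lands in byte 0
	fmt.Println("big-endian bytes:   ", be) // the bit lands in byte 7
}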
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rlh, rsc
https://golang.org/cl/125250043
We allocate scannable memory without a type in only a few places in the runtime.
None of these cases is performance-critical (e.g. the G or finq args buffer),
and in the long term they all need to go away.
It's not worth having special code for this case in mallocgc,
so use a special fake "notype" type for such allocations.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rlh, rsc
https://golang.org/cl/127450044
Currently, goroutines in onM can't be copied/shrunk
(including the very goroutine that triggers GC).
Special-case onM to allow copying.
LGTM=daniel.morsing, khr
R=golang-codereviews, daniel.morsing, khr, rsc
CC=golang-codereviews, rlh
https://golang.org/cl/124550043
Newly allocated memory is subtracted from inuse, even though it was never added to inuse.
Span leftovers are subtracted from both inuse and idle,
even though they were never added to either.
Fixes #8544.
Fixes #8430.
LGTM=khr, cookieo9
R=golang-codereviews, khr, cookieo9
CC=golang-codereviews, rlh, rsc
https://golang.org/cl/130200044
We need to change the interface value representation for
concurrent garbage collection, so that there is no ambiguity
about whether the data word holds a pointer or scalar.
This CL does NOT make any representation changes.
Instead, it removes representation assumptions from
various pieces of code throughout the tree.
The isdirectiface function in cmd/gc/subr.c is now
the only place that decides that policy.
The policy propagates out from there in the reflect
metadata, as a new flag in the internal kind value.
A follow-up CL will change the representation by
changing the isdirectiface function. If that CL causes
problems, it will be easy to roll back.
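For illustration, the two representations as seen via unsafe; a sketch of the layout the follow-up CL establishes (internal detail, not a stable API):

package main

import (
	"fmt"
	"unsafe"
)

// eface mirrors the runtime's empty-interface layout: a type word and
// a data word. Shown only for illustration.
type eface struct {
	typ, data unsafe.Pointer
}

func main() {
	x := 42

	var direct interface{} = &x // pointers are direct: stored in the data word
	e := (*eface)(unsafe.Pointer(&direct))
	fmt.Println(e.data == unsafe.Pointer(&x)) // true: no indirection

	var boxed interface{} = x // non-pointers are boxed: data points to a copy
	e = (*eface)(unsafe.Pointer(&boxed))
	fmt.Println(*(*int)(e.data)) // 42, read through the indirection
}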
Update #8405.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews, r
https://golang.org/cl/129090043
The change to pc-relative addressing will make this illegal.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews, r
https://golang.org/cl/129890043
Update #8527
Fixes: cmd/6g/reg.c:847:24: runtime error: left shift of 1 by 31 places cannot be represented in type 'int'
LGTM=minux, rsc
R=minux, rsc
CC=dvyukov, golang-codereviews
https://golang.org/cl/129290043
type T byte
func (T) String() string { return "X" }
fmt.Sprintf("%s", []T{97, 98, 99, 100}) == "abcd"
fmt.Sprintf("%x", []T{97, 98, 99, 100}) == "61626364"
fmt.Sprintf("%v", []T{97, 98, 99, 100}) == "[X X X X]"
This change makes the last case print correctly.
Before, it would have been "[97 98 99 100]".
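As a runnable program (behavior after this change):

package main

import "fmt"

type T byte

func (T) String() string { return "X" }

func main() {
	s := []T{97, 98, 99, 100}
	fmt.Printf("%s\n", s) // abcd: %s treats the byte-like slice as text
	fmt.Printf("%x\n", s) // 61626364: %x dumps the bytes as hex
	fmt.Printf("%v\n", s) // [X X X X]: %v now uses the String method
}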
Fixes #8360.
LGTM=r
R=r, dan.kortschak
CC=golang-codereviews
https://golang.org/cl/129330043
Improve the performance of move-to-front by using cache-friendly
copies instead of a doubly-linked list. Simplify so that the
underlying slice is the object. Remove the n=0 special case,
which was actually slower with the copy approach.
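A simplified sketch of the slice-based approach (not the package's exact code):

package main

import "fmt"

// mtf holds the symbol list as a plain byte slice; moving a symbol to
// the front is a single cache-friendly copy instead of list surgery.
type mtf []byte

func newMTF(n int) mtf {
	m := make(mtf, n)
	for i := range m {
		m[i] = byte(i)
	}
	return m
}

// Decode returns the symbol at index i and moves it to the front.
func (m mtf) Decode(i int) byte {
	b := m[i]
	copy(m[1:], m[:i]) // shift the first i symbols right by one...
	m[0] = b           // ...and put the decoded symbol at the front
	return b
}

func main() {
	m := newMTF(256)
	for _, i := range []int{104, 105, 0, 0} {
		fmt.Printf("%c", m.Decode(i))
	}
	fmt.Println()
}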
benchmark                old ns/op    new ns/op    delta
BenchmarkDecodeDigits     26429714     23859699    -9.72%
BenchmarkDecodeTwain      76684510     67591946   -11.86%

benchmark                old MB/s     new MB/s     speedup
BenchmarkDecodeDigits        1.63         1.81       1.11x
BenchmarkDecodeTwain         1.63         1.85       1.13x
Updates #6754.
LGTM=adg, agl, josharian
R=adg, agl, josharian
CC=golang-codereviews
https://golang.org/cl/131840043
Currently we do the following dance after sweeping a span:
1. lock mcentral
2. remove the span from a list
3. unlock mcentral
4. unmark span
5. lock mheap
6. insert the span into heap
7. unlock mheap
8. lock mcentral
9. observe empty list
10. unlock mcentral
11. lock mheap
12. grab the span
13. unlock mheap
14. mark span
15. lock mcentral
16. insert the span into empty list
17. unlock mcentral
This change short-circuits the sequence entirely:
we just cache and reuse the span after sweeping.
This gives us functionality similar to (even better than) tcmalloc's transfer cache.
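A toy sketch of the short circuit (invented names; the real code lives in mcentral/mheap):

package main

import (
	"fmt"
	"sync"
)

type span struct{ free int }

// central stands in for mcentral: a mutex-guarded span list. The old
// path paid four lock round-trips per swept span (put/get on both
// mcentral and mheap); the new path keeps the span and allocates
// from it directly, like tcmalloc's transfer cache.
type central struct {
	mu    sync.Mutex
	spans []*span
}

func (c *central) put(s *span) {
	c.mu.Lock()
	c.spans = append(c.spans, s)
	c.mu.Unlock()
}

func (c *central) get() *span {
	c.mu.Lock()
	defer c.mu.Unlock()
	if n := len(c.spans); n > 0 {
		s := c.spans[n-1]
		c.spans = c.spans[:n-1]
		return s
	}
	return nil
}

func sweepAndAlloc(c *central, s *span) *span {
	// old: c.put(s); /* ...heap round-trip elided... */ s = c.get()
	return s // new: just cache and reuse the span we swept
}

func main() {
	c := &central{}
	fmt.Println(sweepAndAlloc(c, &span{free: 10}).free)
}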
benchmark            old ns/op    new ns/op    delta
BenchmarkMalloc8          22.2         19.5    -12.16%
BenchmarkMalloc16         31.0         26.6    -14.19%
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rlh, rsc
https://golang.org/cl/119550043
Mallocgc must be atomic wrt GC, but for performance reasons
we don't acquirem/releasem on the fast path. The code does not have
split-stack checks, so it can't be preempted by GC.
Functions like roundup/add are inlined, and onM/racemalloc are nosplit.
Also add debug code that checks these assumptions.
benchmark                    old ns/op    new ns/op    delta
BenchmarkMalloc8                  20.5         17.2    -16.10%
BenchmarkMalloc16                 29.5         27.0     -8.47%
BenchmarkMallocTypeInfo8          31.5         27.6    -12.38%
BenchmarkMallocTypeInfo16         34.7         30.9    -10.95%
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rlh, rsc
https://golang.org/cl/123100043