Add gostartcall and gostartcallfn.
The old gogocall = gostartcall + gogo.
The old gogocallfn = gostartcallfn + gogo.
R=dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/10036044
In starttheworld() we assume that P's with local work
sit at the beginning of the idle P list.
However, once we start the first M, it can execute all local G's
and steal G's from other P's.
That breaks the assumption above. Thus starttheworld() will fail
to start some P's with local work.
It does not seem to lead to anything very bad, but it is still
wrong and breaks other assumptions
(e.g. we can have a spinning M with local work).
The fix is to collect all P's with local work first,
and only then start them.
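A hedged sketch of the shape of the fix, in Go; the type, field, and helper names here are illustrative, not the runtime's actual ones:

    type pSketch struct {
        link               *pSketch
        runqhead, runqtail uint32
    }

    // Collect every P with local work before starting any M, so that an
    // early-started M stealing G's cannot change which P's we choose to start.
    func startTheWorldSketch(idle *pSketch, startM func(*pSketch)) {
        var withWork []*pSketch
        for p := idle; p != nil; p = p.link {
            if p.runqhead != p.runqtail { // has local runnable G's
                withWork = append(withWork, p)
            }
        }
        for _, p := range withWork {
            startM(p)
        }
    }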
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10051045
The garbage collection routine addframeroots is duplicating
logic in the traceback routine that calls it, sometimes correctly,
sometimes incorrectly, sometimes incompletely.
Pass necessary information to addframeroots instead of
deriving it anew.
Should make addframeroots significantly more robust.
It's certainly smaller.
Also try to standardize on uintptr for saved pc, sp values.
Will make CL 10036044 trivial.
R=golang-dev, dave, dvyukov
CC=golang-dev
https://golang.org/cl/10169045
This avoids problems with inlining in genwrappers, which
occurs after functions have been compiled. Compiling a
function may cause some unused local vars to be removed from
the list. Since a local var may be unused due to
optimization, it is possible that a removed local var winds up
being used in the inlined version, in which case hilarity
ensues.
Fixes #5515.
R=golang-dev, khr, dave
CC=golang-dev
https://golang.org/cl/10210043
It was off in the old implementation (because there was no high-level
description of the function at all). Maybe some day the race detector
should be fixed to handle the wrapper and then enabled for it, but there's
no reason that has to be today.
R=golang-dev
TBR=dvyukov
CC=golang-dev
https://golang.org/cl/10037045
There's no reason to use a different name on each architecture,
and doing so makes it impossible for portable code to refer to
the original Go runtime entry point. Rename it _rt0_go everywhere.
This is a global search and replace only.
R=golang-dev, bradfitz, minux.ma
CC=golang-dev
https://golang.org/cl/10196043
This feature is not yet ready for real use. The CL marks a bite-sized
piece that is ready for review. TODOs that remain:
provide control over output
produce output without setting -v
make work on reflect, sync and time packages
(fail now due to link errors caused by inlining)
better documentation
Almost all packages work now, though, if clumsily; try:
go test -v -cover=count encoding/binary
R=rsc
CC=gobot, golang-dev, remyoudompheng
https://golang.org/cl/10050045
Requires adding a new linker instruction
RET f(SB)
meaning return but then immediately call f.
This is what you'd use to implement a tail call after
fiddling with the arguments, but the compiler only
uses it in genwrapper.
This CL eliminates the copy-and-paste genembedtramp
functions from 5g/8g/6g and makes the code run on ARM
for the first time. It removes a small special case for function
generation, which should help Carl a bit, but at the same time
it does not bother to implement general tail call optimization,
which we do not want anyway.
Fixes #5627.
R=ken2
CC=golang-dev
https://golang.org/cl/10057044
The first identifier in an Object Identifier must be between 0 and 2
inclusive. The range of values that the second one can take depends
on the value of the first one.
The first two identifiers are not necessarily encoded in a single octet,
but as a varint.
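A hedged sketch of the decoding rule this implies (the helper name is illustrative): the first two components arrive packed in a single base-128 value v = 40*first + second, with second limited to 0..39 unless first is 2.

    func splitFirstTwoComponents(v int) (first, second int) {
        if v < 80 {
            return v / 40, v % 40 // first is 0 or 1, second is 0..39
        }
        return 2, v - 80 // first is 2; second may be arbitrarily large
    }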
R=golang-dev, agl
CC=golang-dev
https://golang.org/cl/10140046
The new code matches the code in cc/lex.c and the #define GETC.
This was causing problems scanning runtime·foo if the leading
· byte was returned by the buffer fill.
R=ken2
CC=golang-dev
https://golang.org/cl/10167043
Do not synchronize Add(1) with Wait().
Imitate a read on the first Add(1) and a write on Wait();
this lets the race detector catch common misuses of WaitGroup (see the sketch below):
- Add() called in the additional goroutine itself
- incorrect reuse of a WaitGroup with multiple waiters
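A minimal example of the first misuse; with the imitated read/write, the race detector can now report the Add racing with Wait:

    package main

    import "sync"

    func main() {
        var wg sync.WaitGroup
        go func() {
            wg.Add(1) // wrong: may run after Wait has already returned
            defer wg.Done()
            // ... work ...
        }()
        wg.Wait() // races with the Add above
    }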
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/10093044
Also reduce FixAlloc allocation granularity from 128k to 16k;
small programs do not need that much memory for MCache's and MSpan's.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/10140044
Especially important for Windows because it reserves VM
only in multiples of 64k.
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/10082048
Count only the number of frees; everything else is derivable
and does not need to be counted on every malloc.
benchmark old ns/op new ns/op delta
BenchmarkMalloc8 68 66 -3.07%
BenchmarkMalloc16 75 70 -6.48%
BenchmarkMallocTypeInfo8 102 97 -4.80%
BenchmarkMallocTypeInfo16 108 105 -2.78%
R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/9776043
CFLAGS comes before CPPFLAGS.
Also fix one typo: CPPCFLAGS.
Cleanup for CL 8248043.
R=golang-dev, iant, alberto.garcia.hierro
CC=golang-dev
https://golang.org/cl/9965045
The significant change between TLS 1.0 and 1.1 is the addition of an explicit IV in the case of CBC encrypted records. Support for TLS 1.1 is needed in order to support TLS 1.2.
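A hypothetical sketch of the record-sealing side of that change (function name is mine; assumes imports of crypto/cipher, crypto/rand, and io; MAC and CBC padding are omitted, so payload must already be a multiple of the block size):

    func sealRecordTLS11(block cipher.Block, payload []byte) ([]byte, error) {
        iv := make([]byte, block.BlockSize())
        if _, err := io.ReadFull(rand.Reader, iv); err != nil {
            return nil, err
        }
        rec := append(iv, payload...) // explicit IV travels with the record
        cipher.NewCBCEncrypter(block, iv).CryptBlocks(rec[len(iv):], rec[len(iv):])
        return rec, nil
    }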
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/7880043
Changeset 7557a627e9b5 added a temporary stop-gap to silence
a print format warning for %S. This has been reverted.
None of this code is original. It was copied from the latest
Plan 9 compilers.
R=golang-dev, r, rsc
CC=golang-dev
https://golang.org/cl/8630044
Each of the backends has two prototypes for this function but
no corresponding definition.
R=golang-dev, bradfitz, khr
CC=golang-dev
https://golang.org/cl/9930045
These two symbols don't show up in the Go symbol table
since they're defined in dodata which is called sometime
after symtab. They do, however, show up in the ELF symbol
table.
This regression was introduced in changeset 01c40d533367.
Also, remove the corresponding strings from the ELF strtab
section now that they're unused.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/8650043
Remove unnecessary ( ) around == in && clause.
Add { } around multiline if body, even though it's one statement.
Add runtime: prefix to printed errors.
R=cshapiro, iant
CC=golang-dev
https://golang.org/cl/9685047
This is part of the preemptive scheduler.
stackguard0 is checked in split stack checks and can be set to StackPreempt.
stackguard is not set to StackPreempt (holds the original value).
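A hedged sketch of the split (field names follow the runtime; the struct, layout, and sentinel value are illustrative):

    const stackPreempt = ^uintptr(0) // illustrative poison value

    type gSketch struct {
        stackguard0 uintptr // compared against SP in function prologues; may be stackPreempt
        stackguard  uintptr // always holds the real stack bound
    }

    // Requesting preemption only poisons the value the prologue checks,
    // leaving the true limit intact in stackguard.
    func requestPreempt(gp *gSketch) { gp.stackguard0 = stackPreempt }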
R=golang-dev, daniel.morsing, iant
CC=golang-dev
https://golang.org/cl/9875043
Fixes #5599.
Thanks to minux.ma for the suggested fix.
As we now have a harness to test testing's internal functions, I added some coverage for testing.roundUp, as it is the main consumer of roundDown10.
R=minux.ma, kr, r
CC=golang-dev
https://golang.org/cl/9926043
Before this change, grow work was done only
during map writes to ensure multithreaded safety.
This can lead to maps remaining in a partially
grown state for a long time, potentially forever.
This change allows grow work to happen during reads,
which will lead to grow work finishing sooner, making
the resulting map smaller and faster.
Grow work is not done in parallel. Reads can
happen in parallel while grow work is happening.
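A hedged, much-simplified sketch of the idea (the runtime's real bucket layout, evacuation, and synchronization differ; the types and evacuateOne are stand-ins; assumes sync/atomic is imported):

    type hmapSketch struct {
        oldbuckets []byte // non-nil while a grow is in progress
        growing    uint32 // grow work is never done in parallel
    }

    // Called on both reads and writes: help the grow along if one is pending,
    // but let concurrent readers proceed while a single grower works.
    func growWork(h *hmapSketch, evacuateOne func(*hmapSketch)) {
        if h.oldbuckets != nil && atomic.CompareAndSwapUint32(&h.growing, 0, 1) {
            evacuateOne(h) // moves one old bucket; clears oldbuckets when done
            atomic.StoreUint32(&h.growing, 0)
        }
    }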
R=golang-dev, dvyukov, khr, iant
CC=golang-dev
https://golang.org/cl/8852047
Run the garbage collector on the g0 stack instead of the regular g stack. We do this so that the g stack
we're currently running on is no longer changing. Cuts
the root set down a bit (g0 stacks are not scanned, and
we don't need to scan gc's internal state). Also an
enabler for copyable stacks.
R=golang-dev, cshapiro, khr, 0xe2.0x9a.0x9b, dvyukov, rsc, iant
CC=golang-dev
https://golang.org/cl/9754044
An embedded trampoline is a function that exists to marshal
a receiver of type *S to a receiver of type *T when T is an
embedded field in S.
Embedded trampolines are generated by a special path through
the compiler and are not subject to the general analysis and
annotation done to functions. Their effects must be provided
explicitly.
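A small Go-level illustration of when such a trampoline exists:

    type T struct{ n int }

    func (t *T) M() int { return t.n }

    type S struct {
        T // T embedded in S
        extra int
    }

    // The compiler generates a trampoline for (*S).M that adjusts the receiver
    // from *S to &s.T and then calls (*T).M, roughly:
    //     func (s *S) M() int { return s.T.M() }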
R=golang-dev, r, daniel.morsing, minux.ma
CC=golang-dev
https://golang.org/cl/9874043
* Add a CXXFiles field to Package, which includes .cc, .cpp and .cxx files.
* CXXFiles are compiled using g++, which can be overridden using the CXX environment variable.
* Include .hh, .hpp and .hxx files in HFiles.
* Add support for CPPFLAGS (used for both C and C++) and CXXFLAGS (used only for C++) in cgo directive.
* Changed pkg-config cgo directive to modify CPPFLAGS rather than CFLAGS, so both C and C++ files get any flag returned by pkg-config --cflags.
Fixes #1476.
R=iant, r
CC=bradfitz, gobot, golang-dev, iant, minux.ma, remyoudompheng, seb.binet
https://golang.org/cl/8248043
mheap.map became a pointer, so nelem(h->map) returns 1 rather than the map size.
As a result, coalescing with subsequent spans does not happen.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9649046
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
Reincarnation of committed and rolled back https://golang.org/cl/9805043
The latent bugs that it revealed are fixed:
https://golang.org/cl/9837049
https://golang.org/cl/9778048
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9778049
Then use the limit to make sure MHeap_LookupMaybe & inlined
copies don't return a span if the pointer is beyond the limit.
Use this fact to optimize all call sites.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/9869045
As the code now says:
We are forced to return a float64 because the API is silly, but do
the division as integers so we can ask if AllocsPerRun()==1
instead of AllocsPerRun()<2.
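A typical use enabled by doing the division as integers; work and t are stand-ins for a real function under test and its *testing.T:

    allocs := testing.AllocsPerRun(100, work)
    if allocs != 1 {
        t.Errorf("allocations per run = %v, want exactly 1", allocs)
    }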
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/9837049
Escape analysis already determines that the underlying array
does not escape, but the result was ignored.
Fixes #5484.
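A hypothetical illustration of the kind of conversion this lets stay off the heap; whether this exact case is covered depends on the compiler, so treat it as a sketch:

    func countX(s string) int {
        b := []byte(s) // backing array never escapes countX, so it need not live on the heap
        n := 0
        for _, c := range b {
            if c == 'x' {
                n++
            }
        }
        return n
    }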
R=golang-dev, dave, daniel.morsing
CC=golang-dev
https://golang.org/cl/9662046
A nosplit was assumed to have no argument information and no
pointer map. However, nosplits created by the linker often
have both. This change uses the pointer map size as an
alternate source of argument size when processing a nosplit.
In addition, the symbol table construction pointer map size
and argument size consistency check is strengthened. If
nptrs is greater than 0, it must be equal to the number of
argument words.
R=golang-dev, khr, khr
CC=golang-dev
https://golang.org/cl/9666047
to avoid unintentionally clobbering R9/R10.
Thanks Lucio for the suggestion.
PS: yes, this could be considered a big change (but not an API change), but
as it turns out even temporarily changing R9/R10 in user code is unsafe and
leads to very hard to diagnose problems later, so it is better to disallow
R9/R10 as soon as user code tries to use them.
See CL 6300043 and CL 6305100 for two problems caused by misusing R9/R10.
R=golang-dev, khr, rsc
CC=golang-dev
https://golang.org/cl/9840043
The old code put the index before the period in the precision;
it should be after so it's always before the star, as documented.
A little trickier to do in one pass but compensated for by more
tests and catching a couple of other error cases.
R=rsc
CC=golang-dev
https://golang.org/cl/9751044
Currently we only check the leaf node's issuer against the list of
distinguished names in the server's CertificateRequest message. This
will fail if the client certificate has more than one certificate in
the path and the leaf node issuer isn't in the list of distinguished
names, but the issuer's issuer was in the distinguished names.
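A hedged sketch of the broader check (the function name is mine; assumes imports of bytes and crypto/x509; the real code also has to choose among multiple candidate chains):

    // A chain is acceptable if any certificate in it, not just the leaf,
    // was issued by one of the CAs the server listed.
    func chainAcceptable(chain []*x509.Certificate, acceptableCAs [][]byte) bool {
        for _, cert := range chain {
            for _, ca := range acceptableCAs {
                if bytes.Equal(cert.RawIssuer, ca) {
                    return true
                }
            }
        }
        return false
    }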
R=agl, agl
CC=gobot, golang-dev
https://golang.org/cl/9795043
This is needed for the preemptive scheduler, because during
stoptheworld we want to wait with timeout and re-preempt
M's on timeout.
R=golang-dev, remyoudompheng, iant
CC=golang-dev
https://golang.org/cl/9375043
With this change the compiler emits a bitmap for each function
covering its stack frame arguments area. If an argument word
is known to contain a pointer, a bit is set. The garbage
collector reads this information when scanning the stack by
frames and uses it to ignore locations known not to contain a
pointer.
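A hypothetical illustration of what gets emitted; the real encoding and bit width are the compiler's own:

    // For a function such as
    //     func f(p *int, n int, q *byte)
    // the three argument words would be described by a bitmap like
    var fArgsPointerMap = []byte{1, 0, 1} // 1 = word holds a pointer, 0 = scalar
    // so the collector scans words 0 and 2 and skips word 1.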
R=golang-dev, bradfitz, daniel.morsing, dvyukov, khr, khr, iant, cshapiro
CC=golang-dev
https://golang.org/cl/9223046
This depends on CL 9791044 (runtime: allocate page table lazily).
Once the page table is moved out of the heap, the heap structure becomes small.
This removes unnecessary dereferences during heap access.
No logical changes.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9802043
This removes the 256MB memory allocation at startup,
which conflicts with ulimit.
This will also allow eliminating an unnecessary memory dereference in the GC,
because the page table is usually mapped at a known address.
Update #5049.
Update #5236.
R=golang-dev, khr, r, khr, rsc
CC=golang-dev
https://golang.org/cl/9791044
Currently the test closes random file descriptors,
which leads to hangs (in particular if the netpoll fd is closed).
Try to open only fd 3, since the parent process expects it to be fd 3 anyway.
Fixes #5571.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/9778048
The 'n' variable is used during rescan initiation in the GC_END case,
but it's overwritten with the chan capacity in the GC_CHAN case.
As a result, the rescan is done with the wrong object size.
Fixes #5554.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9831043
multiple failures on amd64
««« original CL description
runtime: introduce helper persistentalloc() function
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
»»»
R=golang-dev
CC=golang-dev
https://golang.org/cl/9822043
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
Variables in data sections of 32-bit executables interfere with
the garbage collector's ability to free objects and/or unnecessarily
slow down the garbage collector.
This changeset moves some static variables to .noptr sections.
'files' in symtab.c is now allocated dynamically.
R=golang-dev, dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/9786044
This text is added to doc.go:
Explicit argument indexes:
In Printf, Sprintf, and Fprintf, the default behavior is for each
formatting verb to format successive arguments passed in the call.
However, the notation [n] immediately before the verb indicates that the
nth one-indexed argument is to be formatted instead. The same notation
before a '*' for a width or precision selects the argument index holding
the value. After processing a bracketed expression [n], arguments n+1,
n+2, etc. will be processed unless otherwise directed.
For example,
fmt.Sprintf("%[2]d %[1]d\n", 11, 22)
will yield "22, 11", while
fmt.Sprintf("%[3]*[2].*[1]f", 12.0, 2, 6),
equivalent to
fmt.Sprintf("%6.2f", 12.0),
will yield " 12.00". Because an explicit index affects subsequent verbs,
this notation can be used to print the same values multiple times
by resetting the index for the first argument to be repeated:
fmt.Sprintf("%d %d %#[1]x %#x", 16, 17)
will yield "16 17 0x10 0x11".
The notation chosen differs from that in C, but I believe it's easier to read
and to remember (we're indexing the arguments), and compatibility with
C's printf was never a strong goal anyway.
While we're here, change the word "field" to "arg" or "argument" in the
code; it was being misused and was confusing.
R=rsc, bradfitz, rogpeppe, minux.ma, peter.armitage
CC=golang-dev
https://golang.org/cl/9680043
Set $status to null to prevent rc from exiting
on the last --no-banner argument check when
used with rc -e. This allows all.rc to not exit
before executing run.rc.
R=golang-dev, lucio.dere, rsc
CC=golang-dev
https://golang.org/cl/9611045
crypto/x509 has ended up with a variety of error formats. This change makes them all start with "x509: ".
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/9736043
It doesn't work, it's not portable, it's not part of the released
binaries, and a better tool is due.
Fixes #1319.
Fixes #4621.
R=golang-dev, bradfitz, dave, rsc
CC=golang-dev
https://golang.org/cl/9681044
According to X.690, only 0 and 255 are allowed as values
for encoded booleans. Also added some tests for parsing
booleans.
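A hedged sketch of the stricter check (the function name is mine; errors is the only import assumed):

    func parseDERBool(b []byte) (bool, error) {
        if len(b) != 1 {
            return false, errors.New("asn1: invalid boolean length")
        }
        switch b[0] {
        case 0x00:
            return false, nil
        case 0xff:
            return true, nil
        }
        return false, errors.New("asn1: invalid boolean value")
    }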
R=golang-dev, agl, r
CC=golang-dev
https://golang.org/cl/9692043
The problem was that server handlers block on done<-,
the goroutine that reads from done blocks on count<-,
and the main goroutine that is supposed to read from count
waits for server handlers to exit.
Fixes #5547.
R=golang-dev, dave, bradfitz
CC=golang-dev
https://golang.org/cl/9722043
This only affects calls where both ReaderFrom and WriterTo are implemented. WriterTo can issue one large write, while ReaderFrom must Read until EOF, potentially reallocating when out of memory. With one large Write, the Writer only needs to allocate once.
This also helps in ioutil.Discard since we can avoid copying memory when the Reader implements WriterTo.
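The resulting preference order, sketched; the real io.Copy also manages the intermediate buffer and error bookkeeping:

    func copySketch(dst io.Writer, src io.Reader) (int64, error) {
        if wt, ok := src.(io.WriterTo); ok {
            return wt.WriteTo(dst) // source can issue one large write
        }
        if rf, ok := dst.(io.ReaderFrom); ok {
            return rf.ReadFrom(src)
        }
        return 0, nil // fall back to the buffered read/write loop (elided)
    }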
R=golang-dev, dsymonds, remyoudompheng, bradfitz
CC=golang-dev, minux.ma
https://golang.org/cl/9462044
This change contains an implementation of the RSASSA-PSS signature
algorithm described in RFC 3447.
R=agl, agl
CC=gobot, golang-dev, r
https://golang.org/cl/9438043
This CL adds missing IPv6 socket options which are required
to control IPv6 as described in RFC 3493, RFC 3542.
Update #5538
R=golang-dev, dave, iant
CC=golang-dev
https://golang.org/cl/9373046
1) go/doc:
- create correct ast.FuncType
- use more commonly used variable names in a test case
2) make ast.FuncType.Pos robust in case of incorrect ASTs
R=golang-dev
CC=golang-dev
https://golang.org/cl/9651044
If a union contains a pointer, it will mess up the garbage collector, causing memory corruption.
R=golang-dev, dave, nightlyone, adg, dvyukov, bradfitz, minux.ma, r, iant
CC=golang-dev
https://golang.org/cl/8469043
This is needed for the preemptive scheduler, because the goroutine
can be preempted at surprising points.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/9376043
When cgo is used, the runtime creates an additional M to handle callbacks on threads not created by Go.
This effectively disabled deadlock detection, which is the right thing, because a Go program can be blocked
and only serve callbacks on external threads.
This also disabled deadlock detection under the race detector, because it happens to use cgo.
With this change the additional M is created lazily on the first cgo call. So the deadlock detector
works for programs that import "C", "net" or "net/http/pprof" but do not actually use them.
This also fixes the deadlock detector under the race detector.
It should be fine to create the M later, because C code cannot call into Go before the first cgo call,
since C code does not know when Go initialization has completed. So a Go program needs to call into C
first, either to create an external thread or to notify a thread created in a global ctor that Go
initialization has completed.
Fixes #4973.
Fixes #5475.
R=golang-dev, minux.ma, iant
CC=golang-dev
https://golang.org/cl/9303046
The original code was correct. The count returned must be the length
of the input slice, not the length of the formatted message.
««« original CL description
log/syslog: report errors from Fprintf
Thanks to chiparus for identifying this.
Fixes#5541.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/9658043
»»»
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/9644044
Currently per-sizeclass stats are lost for destroyed MCache's. This patch fixes this.
Also, only update mstats.heap_alloc on heap operations, because that's the only
stat that needs to be promptly updated. Everything else needs to be up-to-date only in ReadMemStats().
R=golang-dev, remyoudompheng, dave, iant
CC=golang-dev
https://golang.org/cl/9207047
The nlistmin/size thresholds are copied from tcmalloc,
but are unnecessary for Go malloc. We do not do explicit
frees into MCache. For the rare cases when we do (mainly hashmap),
simpler logic will do.
R=rsc, dave, iant
CC=gobot, golang-dev, r, remyoudompheng
https://golang.org/cl/9373043
A bufio.Writer.Flush marks the usual end of a Writer's
life. Recycle its internal buffer on those explicit flushes,
but not on normal, as-needed internal flushes.
benchmark old ns/op new ns/op delta
BenchmarkWriterEmpty 1959 727 -62.89%
benchmark old allocs new allocs delta
BenchmarkWriterEmpty 2 1 -50.00%
benchmark old bytes new bytes delta
BenchmarkWriterEmpty 4215 83 -98.03%
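A hedged sketch of one way to get that effect (the CL's actual mechanism may differ; these helpers are illustrative): keep a tiny free list and return the internal buffer to it on explicit Flush, so a short-lived Writer can reuse a previous buffer instead of allocating.

    var bufFree = make(chan []byte, 4)

    func putBuf(b []byte) {
        select {
        case bufFree <- b:
        default: // free list full; let the GC reclaim it
        }
    }

    func getBuf(n int) []byte {
        select {
        case b := <-bufFree:
            if cap(b) >= n {
                return b[:n]
            }
        default:
        }
        return make([]byte, n)
    }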
R=gri, iant
CC=gobot, golang-dev, voidlogic7
https://golang.org/cl/9459044
This should have been removed in 45c12efb4635. Not a correctness
issue, but unnecessary work.
This CL also adds paranoia checks in removeDep so this doesn't
happen again.
Fixes #5502
R=adg
CC=gobot, golang-dev, google
https://golang.org/cl/9543043
undo CL 8478044 / 0d28fd55e721
Lack of consensus.
««« original CL description
time: add Time.FormatAppend
This is a version of Time.Format that doesn't require allocation.
Fixes#5192
Update #5195
R=r
CC=gobot, golang-dev
https://golang.org/cl/8478044
»»»
R=r
CC=golang-dev
https://golang.org/cl/9462049