Requires adding a new linker instruction
RET f(SB)
meaning return but then immediately call f.
This is what you'd use to implement a tail call after
fiddling with the arguments, but the compiler only
uses it in genwrapper.
This CL eliminates the copy-and-paste genembedtramp
functions from 5g/8g/6g and makes the code run on ARM
for the first time. It removes a small special case for function
generation, which should help Carl a bit, but at the same time
it does not bother to implement general tail call optimization,
which we do not want anyway.
Fixes #5627.
R=ken2
CC=golang-dev
https://golang.org/cl/10057044
The first identifier in an Object Identifier must be between 0 and 2
inclusive. The range of values that the second one can take depends
on the value of the first one.
The first two identifiers are not necessarily encoded in a single octet;
together they are encoded as a base-128 varint.
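For illustration, a minimal Go sketch of the decoding rule described above
(the helper name and the caller that produces v are hypothetical; the real
logic lives in encoding/asn1):

    // firstTwoComponents splits the leading base-128 value v of an
    // encoded OID into its first two identifiers.
    func firstTwoComponents(v int) (first, second int) {
        if v < 80 {
            // First identifier is 0 or 1; the second is limited to 0..39.
            return v / 40, v % 40
        }
        // First identifier is 2; the second may be arbitrarily large,
        // which is why the leading value is a varint, not a single octet.
        return 2, v - 80
    }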
R=golang-dev, agl
CC=golang-dev
https://golang.org/cl/10140046
The new code matches the code in cc/lex.c and the #define GETC.
This was causing problems scanning runtime·foo if the leading
· byte was returned by the buffer fill.
R=ken2
CC=golang-dev
https://golang.org/cl/10167043
Do not synchronize Add(1) with Wait().
Imitate a read on the first Add(1) and a write on Wait();
this makes it possible to catch common misuses of WaitGroup:
- Add() called in the additional goroutine itself
- incorrect reuse of WaitGroup with multiple waiters
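For reference, a minimal example of the first misuse listed above, which the
new annotations let the race detector flag:

    package main

    import "sync"

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            go func() {
                wg.Add(1) // WRONG: may run after Wait has already returned
                defer wg.Done()
            }()
        }
        wg.Wait() // may return before any goroutine has called Add(1)
    }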
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/10093044
Also reduce FixAlloc allocation granularity from 128k to 16k;
small programs do not need that much memory for MCaches and MSpans.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/10140044
Especially important for Windows because it reserves VM
only in multiples of 64k.
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/10082048
Count only the number of frees; everything else is derivable
and does not need to be counted on every malloc.
benchmark old ns/op new ns/op delta
BenchmarkMalloc8 68 66 -3.07%
BenchmarkMalloc16 75 70 -6.48%
BenchmarkMallocTypeInfo8 102 97 -4.80%
BenchmarkMallocTypeInfo16 108 105 -2.78%
R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/9776043
CFLAGS comes before CPPFLAGS.
Also fix one typo: CPPCFLAGS.
Cleanup for CL 8248043.
R=golang-dev, iant, alberto.garcia.hierro
CC=golang-dev
https://golang.org/cl/9965045
The significant change between TLS 1.0 and 1.1 is the addition of an explicit IV in the case of CBC encrypted records. Support for TLS 1.1 is needed in order to support TLS 1.2.
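A rough sketch of the explicit-IV idea, not the package's record-layer code:
each TLS 1.1 CBC record carries its own random IV instead of chaining from
the previous record. Padding and MAC handling are omitted, and the function
name is illustrative.

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "io"
    )

    // sealRecordTLS11 prefixes each CBC-encrypted record with a fresh,
    // explicit IV, as described above. plaintext must already be padded
    // to the cipher block size.
    func sealRecordTLS11(block cipher.Block, plaintext []byte) ([]byte, error) {
        iv := make([]byte, block.BlockSize())
        if _, err := io.ReadFull(rand.Reader, iv); err != nil {
            return nil, err
        }
        out := make([]byte, len(iv)+len(plaintext))
        copy(out, iv)
        cipher.NewCBCEncrypter(block, iv).CryptBlocks(out[len(iv):], plaintext)
        return out, nil
    }

    func main() {
        block, _ := aes.NewCipher(make([]byte, 16)) // demo key only
        _, _ = sealRecordTLS11(block, make([]byte, 32))
    }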
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/7880043
Changeset 7557a627e9b5 added a temporary stop-gap to silence
a print format warning for %S. This has been reverted.
None of this code is original. It was copied from the latest
Plan 9 compilers.
R=golang-dev, r, rsc
CC=golang-dev
https://golang.org/cl/8630044
Each of the backends has two prototypes for this function but
no corresponding definition.
R=golang-dev, bradfitz, khr
CC=golang-dev
https://golang.org/cl/9930045
These two symbols don't show up in the Go symbol table
since they're defined in dodata which is called sometime
after symtab. They do, however, show up in the ELF symbol
table.
This regression was introduced in changeset 01c40d533367.
Also, remove the corresponding strings from the ELF strtab
section now that they're unused.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/8650043
Remove unnecessary ( ) around == in && clause.
Add { } around multiline if body, even though it's one statement.
Add runtime: prefix to printed errors.
R=cshapiro, iant
CC=golang-dev
https://golang.org/cl/9685047
This is part of the preemptive scheduler.
stackguard0 is checked in split stack checks and can be set to StackPreempt.
stackguard is not set to StackPreempt (it holds the original value).
R=golang-dev, daniel.morsing, iant
CC=golang-dev
https://golang.org/cl/9875043
Fixes #5599.
Thanks to minux.ma for the suggested fix.
As we now have a harness to test the testing package's internal functions, I added some coverage for testing.roundUp, as it is the main consumer of roundDown10.
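For context, roughly what the two helpers do; this is a paraphrase of their
behaviour as I understand it, not the package source:

    package sketch

    // roundDown10 returns the largest power of 10 no greater than n.
    func roundDown10(n int) int {
        result := 1
        for n >= 10 {
            n /= 10
            result *= 10
        }
        return result
    }

    // roundUp rounds n up to a benchmark-friendly 1, 2, 3, 5 or 10 times
    // that power of 10.
    func roundUp(n int) int {
        base := roundDown10(n)
        switch {
        case n <= base:
            return base
        case n <= 2*base:
            return 2 * base
        case n <= 3*base:
            return 3 * base
        case n <= 5*base:
            return 5 * base
        default:
            return 10 * base
        }
    }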
R=minux.ma, kr, r
CC=golang-dev
https://golang.org/cl/9926043
Before this change, grow work was done only
during map writes to ensure multithreaded safety.
This can lead to maps remaining in a partially
grown state for a long time, potentially forever.
This change allows grow work to happen during reads,
which will lead to grow work finishing sooner, making
the resulting map smaller and faster.
Grow work is not done in parallel. Reads can
happen in parallel while grow work is happening.
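A toy sketch of the idea for reference: not the runtime's hashmap code, and
single-goroutine only; the names are made up. Lookups also evacuate one old
bucket, so growth finishes even if the map is never written again.

    package main

    type growMap struct {
        old, cur  []map[string]int // old is non-nil while a grow is in progress
        evacuated int              // number of old buckets copied so far
    }

    // growWork copies one old bucket into the new table.
    func (m *growMap) growWork() {
        if m.old == nil {
            return
        }
        for k, v := range m.old[m.evacuated] {
            m.cur[hash(k)%uint(len(m.cur))][k] = v
        }
        m.old[m.evacuated] = nil
        m.evacuated++
        if m.evacuated == len(m.old) {
            m.old = nil // grow finished
        }
    }

    // get does a little grow work on every read, so a partially grown
    // map does not stay that way forever.
    func (m *growMap) get(k string) (int, bool) {
        m.growWork()
        if m.old != nil {
            if v, ok := m.old[hash(k)%uint(len(m.old))][k]; ok {
                return v, true
            }
        }
        v, ok := m.cur[hash(k)%uint(len(m.cur))][k]
        return v, ok
    }

    func hash(s string) uint {
        var h uint
        for i := 0; i < len(s); i++ {
            h = h*31 + uint(s[i])
        }
        return h
    }

    func main() {
        m := &growMap{
            old: []map[string]int{{"a": 1}, {"b": 2}}, // demo placement
            cur: []map[string]int{{}, {}, {}, {}},
        }
        m.get("a") // this read also evacuates an old bucket
        m.get("b")
    }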
R=golang-dev, dvyukov, khr, iant
CC=golang-dev
https://golang.org/cl/8852047
instead of the regular g stack. We do this so that the g stack
we're currently running on is no longer changing. Cuts
the root set down a bit (g0 stacks are not scanned, and
we don't need to scan gc's internal state). Also an
enabler for copyable stacks.
R=golang-dev, cshapiro, khr, 0xe2.0x9a.0x9b, dvyukov, rsc, iant
CC=golang-dev
https://golang.org/cl/9754044
An embedded trampoline is a function that exists to marshal
a receiver of type *S to a receiver of type *T when T is an
embedded field in S.
Embedded trampolines are generated by a special path through
the compiler and are not subject to the general analysis and
annotation done to functions. Their effects must be provided
explicitly.
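For reference, the Go-level situation that triggers such a trampoline (the
type names here are illustrative):

    package main

    type T struct{ n int }

    func (t *T) M() int { return t.n }

    // S embeds T, so M is promoted to *S.
    type S struct {
        T
        extra string
    }

    func main() {
        s := &S{T: T{n: 42}}
        // This call goes through a compiler-generated embedded trampoline
        // that converts the *S receiver to &s.T before invoking (*T).M.
        _ = s.M()
    }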
R=golang-dev, r, daniel.morsing, minux.ma
CC=golang-dev
https://golang.org/cl/9874043
* Add a CXXFiles field to Package, which includes .cc, .cpp and .cxx files.
* CXXFiles are compiled using g++, which can be overridden using the CXX environment variable.
* Include .hh, .hpp and .hxx files in HFiles.
* Add support for CPPFLAGS (used for both C and C++) and CXXFLAGS (used only for C++) in cgo directives.
* Change the pkg-config cgo directive to modify CPPFLAGS rather than CFLAGS, so both C and C++ files get any flags returned by pkg-config --cflags.
Fixes #1476.
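A minimal sketch of the new directives in use (the libfoo pkg-config name,
foo.h header, and foo_answer function are hypothetical):

    package foo

    /*
    // CPPFLAGS are applied to both C and C++ files.
    #cgo CPPFLAGS: -DFOO_DEBUG
    // CXXFLAGS are applied only to .cc/.cpp/.cxx files, compiled with g++ (or $CXX).
    #cgo CXXFLAGS: -std=c++11
    // pkg-config --cflags output now lands in CPPFLAGS, so C and C++ both see it.
    #cgo pkg-config: libfoo
    #include "foo.h"
    */
    import "C"

    // Answer calls a function implemented in this package's C++ sources.
    func Answer() int {
        return int(C.foo_answer())
    }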
R=iant, r
CC=bradfitz, gobot, golang-dev, iant, minux.ma, remyoudompheng, seb.binet
https://golang.org/cl/8248043
mheap.map became a pointer, so nelem(h->map) returns 1 rather than the map size.
As a result, coalescing with subsequent spans does not happen.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9649046
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
It can also be used for type info allocation and itab allocation.
There are also several places in GC where we do the same thing;
they can be changed to use persistentalloc().
It can also be used in FixAlloc, because each instance of FixAlloc
allocates in 128K regions, which is too eager.
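A rough Go sketch of the caching idea for reference: the real persistentalloc
is runtime C code, and the names, the 64K chunk size, and the lack of locking
here are all simplifications.

    package main

    const chunkSize = 64 << 10 // illustrative chunk size

    // sysAlloc stands in for the runtime's SysAlloc (an OS memory request).
    func sysAlloc(n int) []byte { return make([]byte, n) }

    // persistent hands out small, never-freed pieces of a larger cached
    // chunk, so each small allocation avoids a separate system request.
    type persistent struct {
        buf []byte
        off int
    }

    func (p *persistent) alloc(n int) []byte {
        if n > chunkSize {
            return sysAlloc(n) // large requests bypass the cache
        }
        if p.buf == nil || p.off+n > len(p.buf) {
            p.buf = sysAlloc(chunkSize) // refill the cache
            p.off = 0
        }
        b := p.buf[p.off : p.off+n]
        p.off += n
        return b
    }

    func main() {
        var p persistent
        _ = p.alloc(128) // small allocations share one chunk
        _ = p.alloc(256)
    }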
Reincarnation of committed and rolled back https://golang.org/cl/9805043
The latent bugs that it revealed are fixed:
https://golang.org/cl/9837049
https://golang.org/cl/9778048
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9778049
Then use the limit to make sure MHeap_LookupMaybe & inlined
copies don't return a span if the pointer is beyond the limit.
Use this fact to optimize all call sites.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/9869045
As the code now says:
We are forced to return a float64 because the API is silly, but do
the division as integers so we can ask if AllocsPerRun()==1
instead of AllocsPerRun()<2.
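For context, the style of test this enables (illustrative; the exact
allocation count depends on the function under test):

    package example

    import (
        "fmt"
        "testing"
    )

    func TestSprintfAllocs(t *testing.T) {
        n := testing.AllocsPerRun(100, func() {
            _ = fmt.Sprintf("%d", 42)
        })
        // Integer division inside AllocsPerRun makes an exact comparison
        // meaningful, so the test can ask for ==1 rather than <2.
        if n != 1 {
            t.Errorf("got %v allocs per run, want 1", n)
        }
    }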
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/9837049
Escape analysis already determines that the underlying array
does not escape, but the result was ignored.
Fixes #5484.
R=golang-dev, dave, daniel.morsing
CC=golang-dev
https://golang.org/cl/9662046
A nosplit was assumed to have no argument information and no
pointer map. However, nosplits created by the linker often
have both. This change uses the pointer map size as an
alternate source of argument size when processing a nosplit.
In addition, the symbol table construction pointer map size
and argument size consistency check is strengthened. If
nptrs is greater than 0, it must be equal to the number of
argument words.
R=golang-dev, khr, khr
CC=golang-dev
https://golang.org/cl/9666047
to avoid unintentionally clobbering R9/R10.
Thanks to Lucio for the suggestion.
PS: yes, this could be considered a big change (but not an API change), but
as it turns out, even temporarily changing R9/R10 in user code is unsafe and
leads to problems that are very hard to diagnose later; better to disable
the use of R9/R10 when the user first uses it.
See CL 6300043 and CL 6305100 for two problems caused by misusing R9/R10.
R=golang-dev, khr, rsc
CC=golang-dev
https://golang.org/cl/9840043
The old code put the index before the period in the precision;
it should be after so it's always before the star, as documented.
A little trickier to do in one pass but compensated for by more
tests and catching a couple of other error cases.
R=rsc
CC=golang-dev
https://golang.org/cl/9751044
Currently we only check the leaf node's issuer against the list of
distinguished names in the server's CertificateRequest message. This
will fail if the client certificate has more than one certificate in
the path and the leaf node's issuer isn't in the list of distinguished
names, but the issuer's issuer is in the distinguished names.
R=agl, agl
CC=gobot, golang-dev
https://golang.org/cl/9795043
This is needed for the preemptive scheduler, because during
stoptheworld we want to wait with a timeout and re-preempt
M's on timeout.
R=golang-dev, remyoudompheng, iant
CC=golang-dev
https://golang.org/cl/9375043
With this change the compiler emits a bitmap for each function
covering its stack frame arguments area. If an argument word
is known to contain a pointer, a bit is set. The garbage
collector reads this information when scanning the stack by
frames and uses it to ignore locations known not to contain a
pointer.
R=golang-dev, bradfitz, daniel.morsing, dvyukov, khr, khr, iant, cshapiro
CC=golang-dev
https://golang.org/cl/9223046
This depends on: 9791044: runtime: allocate page table lazily
Once the page table is moved out of the heap, the heap structure becomes small.
This removes unnecessary dereferences during heap access.
No logical changes.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9802043
This removes the 256MB memory allocation at startup,
which conflicts with ulimit.
It will also allow eliminating an unnecessary memory dereference in GC,
because the page table is usually mapped at a known address.
Update #5049.
Update #5236.
R=golang-dev, khr, r, khr, rsc
CC=golang-dev
https://golang.org/cl/9791044
Currently the test closes random file descriptors,
which leads to hangs (in particular if the netpoll fd is closed).
Try to open only fd 3, since the parent process expects it to be fd 3 anyway.
Fixes #5571.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/9778048
The 'n' variable is used during rescan initiation in the GC_END case,
but it's overwritten with the chan capacity in the GC_CHAN case.
As a result, rescan is done with the wrong object size.
Fixes #5554.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9831043
multiple failures on amd64
««« original CL description
runtime: introduce helper persistentalloc() function
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
»»»
R=golang-dev
CC=golang-dev
https://golang.org/cl/9822043
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
Variables in data sections of 32-bit executables interfere with
the garbage collector's ability to free objects and/or unnecessarily
slow down the garbage collector.
This changeset moves some static variables to .noptr sections.
'files' in symtab.c is now allocated dynamically.
R=golang-dev, dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/9786044
This text is added to doc.go:
Explicit argument indexes:
In Printf, Sprintf, and Fprintf, the default behavior is for each
formatting verb to format successive arguments passed in the call.
However, the notation [n] immediately before the verb indicates that the
nth one-indexed argument is to be formatted instead. The same notation
before a '*' for a width or precision selects the argument index holding
the value. After processing a bracketed expression [n], arguments n+1,
n+2, etc. will be processed unless otherwise directed.
For example,
fmt.Sprintf("%[2]d %[1]d\n", 11, 22)
will yield "22, 11", while
fmt.Sprintf("%[3]*[2].*[1]f", 12.0, 2, 6),
equivalent to
fmt.Sprintf("%6.2f", 12.0),
will yield " 12.00". Because an explicit index affects subsequent verbs,
this notation can be used to print the same values multiple times
by resetting the index for the first argument to be repeated:
fmt.Sprintf("%d %d %#[1]x %#x", 16, 17)
will yield "16 17 0x10 0x11".
The notation chosen differs from that in C, but I believe it's easier to read
and to remember (we're indexing the arguments), and compatibility with
C's printf was never a strong goal anyway.
While we're here, change the word "field" to "arg" or "argument" in the
code; it was being misused and was confusing.
R=rsc, bradfitz, rogpeppe, minux.ma, peter.armitage
CC=golang-dev
https://golang.org/cl/9680043