We return memory to the kernel with madvise(..., DONTNEED).
Also mark returned memory with NOHUGEPAGE to keep the kernel from
merging this memory into a huge page, which would effectively
reallocate it.
Only known to be a problem on linux/{386,amd64,amd64p32} at the moment.
It may come up on other os/arch combinations in the future.
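For illustration only, a minimal sketch of the madvise pattern using the
public syscall package on Linux (the runtime's actual sysUnused goes through
its own syscall stubs; releasePages here is a made-up helper):

package main

import (
    "fmt"
    "syscall"
)

// releasePages hints to the kernel that b's pages are no longer needed and
// asks it not to rebuild them into a transparent huge page later.
func releasePages(b []byte) error {
    // MADV_DONTNEED lets the kernel reclaim the physical pages backing b.
    if err := syscall.Madvise(b, syscall.MADV_DONTNEED); err != nil {
        return err
    }
    // MADV_NOHUGEPAGE keeps khugepaged from merging this range into a huge
    // page, which would effectively reallocate the memory just returned.
    return syscall.Madvise(b, syscall.MADV_NOHUGEPAGE)
}

func main() {
    const pageSize = 4096
    b, err := syscall.Mmap(-1, 0, 64*pageSize,
        syscall.PROT_READ|syscall.PROT_WRITE,
        syscall.MAP_ANON|syscall.MAP_PRIVATE)
    if err != nil {
        panic(err)
    }
    defer syscall.Munmap(b)
    fmt.Println("release:", releasePages(b))
}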
Fixes #8832
Change-Id: Ifffc6627a0296926e3f189a8a9b6e4bdb54c79eb
Reviewed-on: https://go-review.googlesource.com/5660
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
We need to distinguish pointers to free spans, which indicate bugs in
our pointer analysis, from pointers to never-in-the-heap spans, which
can legitimately arise from sysAlloc/mmap/etc. This normally isn't a
problem because the heap is contiguous, but in some situations
(particularly on 32-bit) the heap must grow around an already allocated
region.
The bad pointer test is disabled, so this fix doesn't actually do
anything yet, but it removes one barrier to re-enabling it.
Fixes #9872.
Change-Id: I0a92db4d43b642c58d2b40af69c906a8d9777f88
Reviewed-on: https://go-review.googlesource.com/5780
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Each architecture had its own Dconv (operand printer) but the syntax is
close to uniform and the code overlap was considerable. Consolidate these
into a single top-level function. A similar but smaller unification is done
for Mconv ("Name" formatter) as well.
The signature is changed: the flag was unused, so drop it. Add a
function argument, Rconv, that must be supplied by the caller.
TODO: A future change will unify Rconv as well and this argument
will go away.
Some formats changed, because of the automatic consistency
created by unification. For instance, 0(R1) always prints as (R1)
now, and foo+0(SB) is just foo(SB). Before, some made these
simplifications and some didn't; now they all do.
Update the asm tests that depend on the format.
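As a rough illustration of the simplification (memOperand is a hypothetical
helper, not the real Dconv/Mconv code): a zero offset is now always omitted.

package main

import "fmt"

// memOperand formats a register-indirect operand the way the unified
// printer now does: a zero offset is dropped, so 0(R1) prints as (R1)
// and foo+0(SB) prints as foo(SB). Names and registers are plain strings
// here purely for illustration.
func memOperand(sym string, offset int64, reg string) string {
    switch {
    case sym != "" && offset != 0:
        return fmt.Sprintf("%s+%d(%s)", sym, offset, reg)
    case sym != "":
        return fmt.Sprintf("%s(%s)", sym, reg)
    case offset != 0:
        return fmt.Sprintf("%d(%s)", offset, reg)
    }
    return fmt.Sprintf("(%s)", reg)
}

func main() {
    fmt.Println(memOperand("", 0, "R1"))    // (R1)
    fmt.Println(memOperand("foo", 0, "SB")) // foo(SB)
    fmt.Println(memOperand("foo", 8, "SB")) // foo+8(SB)
}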
Change-Id: I6e3310bc19814c0c784ff0b960a154521acd9532
Reviewed-on: https://go-review.googlesource.com/5920
Reviewed-by: Russ Cox <rsc@golang.org>
Available darwin/arm devices sporadically have trouble mapping 256M.
I would really appreciate it if anyone could check my working on
this, and make sure there aren't obviously bad consequences I
haven't considered.
Change-Id: Id1a8edae104d974fcf5f9333274f958625467f79
Reviewed-on: https://go-review.googlesource.com/5752
Reviewed-by: Keith Randall <khr@golang.org>
Also introduce an actual data structure for the table.
Change-Id: I6bbe9aff8a872ae254f3739ae4ca17f7b5c4507a
Reviewed-on: https://go-review.googlesource.com/5701
Reviewed-by: Rob Pike <r@golang.org>
The dummy implementation was causing lots of argument lists
to be prepared and thrown away.
Change-Id: Id0040dec6b0937f3daa8a8d8911fa3280123e863
Reviewed-on: https://go-review.googlesource.com/5700
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
verifyAsm is still on, but this CL changes the order to asm then 6a.
Before, it was 6a then asm, but that meant that any bugs in asm's
handling of bad input would never surface, because 6a would reject the
input first. Now asm gets first crack, as it must.
Also implement the -trimpath flag in asm. It's necessary and trivial.
Change-Id: Ifb2ab870de1aa1b53dec76a78ac697a0d36fa80a
Reviewed-on: https://go-review.googlesource.com/5850
Reviewed-by: Russ Cox <rsc@golang.org>
Missing cases for JMP $4 and foo+4(SB):AX. Both are odd but 8a accepts them
and they seem valid.
Change-Id: Ic739f626fcc79ace1eaf646c5dfdd96da59df165
Reviewed-on: https://go-review.googlesource.com/5693
Reviewed-by: Russ Cox <rsc@golang.org>
Since allglock is held in this function, there's no point in
tiptoeing around allgs. Just use a for-range loop.
Change-Id: I1ee61c7e8cac8b8ebc8107c0c22f739db5db9840
Reviewed-on: https://go-review.googlesource.com/5882
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Previously, we had three loops in the garbage collector that all
cleared the per-G GC flags. Consolidate these into one function.
This one function is designed to work in a concurrent setting. As a
result, it's slightly more expensive than the loops it replaces during
STW phases, but these happen at most twice per GC.
Change-Id: Id1ec0074fd58865eb0112b8a0547b267802d0df1
Reviewed-on: https://go-review.googlesource.com/5881
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
The loop in gcMark is redundant with the gcworkdone resetting
performed by markroot, which is called a few lines later in gcMark.
Change-Id: Ie0a826a614ecfa79e6e6b866e8d1de40ba515856
Reviewed-on: https://go-review.googlesource.com/5880
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Package runtime's Go code was converted to directly call getcallerpc
and getcallersp in https://golang.org/cl/138740043, but the assembly
implementations were not removed.
Change-Id: Ib2eaee674d594cbbe799925aae648af782a01c83
Reviewed-on: https://go-review.googlesource.com/5901
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
NetBSD's semaphore implementation is derived from OpenBSD's, but has
subsequently diverged due to cleanups that were only applied to the
latter (https://golang.org/cl/137960043, https://golang.org/cl/5563).
This CL applies analogous cleanups for NetBSD.
Notably, we can also remove the scary NetBSD deadlock warning.
NetBSD's manual pages document that lwp_unpark on a not-yet-parked LWP
will cause that LWP's next lwp_park system call to return immediately,
so there's no race hazard.
Change-Id: Ib06844c420d2496ac289748eba13eb4700bbbbb2
Reviewed-on: https://go-review.googlesource.com/5564
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Joel Sing <jsing@google.com>
Updates #9974
The *at family of syscalls requires some constants to be defined in the
syscall package for linux. Add the necessary constants and regenerate
the ztypes_linux_*.go files.
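For example, with these constants in place a Linux program can call the *at
syscalls relative to the current directory; a small sketch (assuming
syscall.AT_FDCWD and syscall.Openat are among the generated definitions):

package main

import (
    "fmt"
    "syscall"
)

func main() {
    // AT_FDCWD makes Openat resolve the path relative to the current
    // working directory, just like plain open(2).
    fd, err := syscall.Openat(syscall.AT_FDCWD, "/etc/hostname",
        syscall.O_RDONLY, 0)
    if err != nil {
        fmt.Println("openat:", err)
        return
    }
    defer syscall.Close(fd)
    fmt.Println("opened fd", fd)
}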
Change-Id: I6df343fef7bcacad30d36c7900dbfb621465a4fe
Reviewed-on: https://go-review.googlesource.com/5836
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
(gdb) p x
Python Exception <class 'gdb.error'> There is no member named b.:
$2 = map[string]string
->
(gdb) p x
$1 = map[string]string = {["shane"] = "hansen"}
Change-Id: I874d02a029f2ac9afc5ab666afb65760ec2c3177
Reviewed-on: https://go-review.googlesource.com/5522
Reviewed-by: Ian Lance Taylor <iant@golang.org>
OpenBSD's thrsleep system call includes an "abort" parameter, which
specifies a memory address to be tested after being registered on the
sleep channel (i.e., capable of being woken up by thrwakeup). By
passing a pointer to waitsemacount for this parameter, we avoid race
conditions without needing a lock. Instead we just need to use
atomicload, cas, and xadd to mutate the semaphore count.
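The count manipulation is the usual lock-free loop; a sketch of the idea
using the public sync/atomic package (the runtime uses its own atomicload,
cas, and xadd primitives, and on failure would sleep in thrsleep with the
count's address as the abort parameter):

package main

import (
    "fmt"
    "sync/atomic"
)

// trySemAcquire decrements *count if it is positive, without taking a lock.
// It reports whether the caller acquired the semaphore; on failure the real
// code parks in thrsleep, passing &count so a concurrent wakeup is not lost.
func trySemAcquire(count *uint32) bool {
    for {
        v := atomic.LoadUint32(count)
        if v == 0 {
            return false
        }
        if atomic.CompareAndSwapUint32(count, v, v-1) {
            return true
        }
    }
}

// semRelease increments the count; the real code then calls thrwakeup.
func semRelease(count *uint32) {
    atomic.AddUint32(count, 1)
}

func main() {
    var count uint32
    semRelease(&count)
    fmt.Println(trySemAcquire(&count)) // true
    fmt.Println(trySemAcquire(&count)) // false
}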
Change-Id: If9f2ab7cfd682da217f9912783cadea7e72283a8
Reviewed-on: https://go-review.googlesource.com/5563
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Joel Sing <jsing@google.com>
Updates #9974
This proposal moves the definition of Dup2 from the generic
syscall_linux.go to the GOOS-specific variants. This is in preparation
for the arm64 port.
For all existing platforms Dup2 is not affected. When arm64 is added we'll use
either a forwarding method to Dup3 or
//sysnb Dup2(oldfd int, newfd int) (err error) = SYS_DUP3
Because mksyscall.pl does not sort symbols before generating the
output file, the diff includes some unavoidable code moves, as Dup2 is
processed later in the run.
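The forwarding option mentioned above would look roughly like the sketch
below, which wraps the exported syscall.Dup3 on Linux (dup2 here is a
stand-in name, not code from this CL):

package main

import (
    "fmt"
    "os"
    "syscall"
)

// dup2 forwards to dup3(2) with flags=0, which is what an arm64 port would
// have to do since linux/arm64 has no dup2 syscall. One corner case: unlike
// dup2(2), dup3(2) fails with EINVAL when oldfd == newfd.
func dup2(oldfd, newfd int) error {
    return syscall.Dup3(oldfd, newfd, 0)
}

func main() {
    // Duplicate stdout onto fd 10 as a quick demonstration.
    if err := dup2(int(os.Stdout.Fd()), 10); err != nil {
        fmt.Println("dup2:", err)
        return
    }
    syscall.Write(10, []byte("hello via fd 10\n"))
    syscall.Close(10)
}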
Discussion: https://groups.google.com/forum/#!topic/golang-dev/zpeFtN2z5Fc
Change-Id: Icdedf55bb29e749c4230e1ee371bf9d0bd0cfb38
Reviewed-on: https://go-review.googlesource.com/5835
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Dave Cheney <dave@cheney.net>
Updates #9974
This proposal moves the definitions of Pipe and Pipe2 from the generic
syscall_linux.go to the GOOS-specific variants. This is in preparation
for the arm64 port.
For platforms where pipe2(2) is not supported by the minimum 2.6.23
kernel (amd64 and 386), we retain pipe(2). For all other platforms,
pipe(2) is removed and Pipe forwards to pipe2(2).
Because mksyscall.pl does not sort symbols before generating the
output file, the diff includes some unavoidable code moves, as Pipe and
Pipe2 are processed later in the run.
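Roughly, on the platforms that drop pipe(2), Pipe becomes a thin wrapper
over pipe2(2) with no flags; a sketch of the shape using the exported
syscall.Pipe2 on Linux (pipeNoFlags is a stand-in name, and the real code
uses a [2]_C_int buffer rather than a slice):

package main

import (
    "fmt"
    "syscall"
)

// pipeNoFlags mimics what Pipe does on the platforms described above:
// it simply calls pipe2(2) with flags set to zero.
func pipeNoFlags(p []int) error {
    if len(p) != 2 {
        return syscall.EINVAL
    }
    return syscall.Pipe2(p, 0)
}

func main() {
    p := make([]int, 2)
    if err := pipeNoFlags(p); err != nil {
        fmt.Println("pipe:", err)
        return
    }
    syscall.Write(p[1], []byte("ping"))
    buf := make([]byte, 4)
    n, _ := syscall.Read(p[0], buf)
    fmt.Println(string(buf[:n]))
    syscall.Close(p[0])
    syscall.Close(p[1])
}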
Discussion: https://groups.google.com/forum/#!topic/golang-dev/zpeFtN2z5Fc
Change-Id: Ie26d6761eeb9760dbaff974ee8bc0d57a9ceaee4
Reviewed-on: https://go-review.googlesource.com/5833
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
This is a reproposal of CL 2957, with its scope restricted to just
arm systems.
With respect to rsc's comments on CL 2957, all of my arm hosts perform
the build significantly faster with this change in place.
Change-Id: Ie09be1a73d5bb777ec5bca3ba93ba73d5612d141
Reviewed-on: https://go-review.googlesource.com/5834
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
When GODEBUG=gctrace=2, two GCs are performed. During the first GC,
the stack scan sets the g's gcscanvalid and gcworkdone flags to true,
indicating that the stacks have been scanned and do not need to
be rescanned. These need to be reset to false for the second GC so the
stacks are rescanned; otherwise, if the only pointer to an object is
on the stack, it will not be discovered and the object will be freed.
Typically this will include the object that was just allocated in
the mallocgc call that initiated the GC.
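In spirit, the fix is a loop that clears both flags on every G before the
second collection; an illustration with a stand-in type (the real fields
live on runtime.g and the loop runs under allglock):

package main

import "fmt"

// g is a stand-in for runtime.g, carrying only the two flags in question.
type g struct {
    gcscanvalid bool
    gcworkdone  bool
}

// resetScanState forces the next GC to rescan every goroutine's stack.
func resetScanState(allgs []*g) {
    for _, gp := range allgs {
        gp.gcscanvalid = false
        gp.gcworkdone = false
    }
}

func main() {
    allgs := []*g{{gcscanvalid: true, gcworkdone: true}}
    resetScanState(allgs)
    fmt.Println(allgs[0].gcscanvalid, allgs[0].gcworkdone) // false false
}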
Change-Id: Ic25163f4689905fd810c90abfca777324005c02f
Reviewed-on: https://go-review.googlesource.com/5861
Reviewed-by: Russ Cox <rsc@golang.org>
Rebuild the zsyscall_linux_*.go files in preparation for #9974.
The only change is the ppc64/ppc64le files which were not rebuilt when
syscall.use was added.
Change-Id: I804c63731e4900c782025de04ea3585d99688958
Reviewed-on: https://go-review.googlesource.com/5831
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
- added a new field ast.EmptyStmt.Implicit to indicate an explicit
or implicit semicolon
- fixed ast.EmptyStmt.End() accordingly
- adjusted parser and added test case
Fixes #9979.
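A small example of the new field in use (assuming this change, so
*ast.EmptyStmt carries an Implicit bool):

package main

import (
    "fmt"
    "go/ast"
    "go/parser"
    "go/token"
)

func main() {
    src := "package p\nfunc f() { ; }\n"
    fset := token.NewFileSet()
    f, err := parser.ParseFile(fset, "x.go", src, 0)
    if err != nil {
        panic(err)
    }
    ast.Inspect(f, func(n ast.Node) bool {
        if s, ok := n.(*ast.EmptyStmt); ok {
            // Implicit reports whether the semicolon was inserted by the
            // scanner rather than written in the source; End() now
            // accounts for the missing character in that case.
            fmt.Println("empty statement, implicit semicolon:", s.Implicit)
        }
        return true
    })
}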
Change-Id: I72b0983b3a0cabea085598e1bf6c8df629776b57
Reviewed-on: https://go-review.googlesource.com/5720
Reviewed-by: Russ Cox <rsc@golang.org>
Missing leading A on names.
Change-Id: I6f3a66bdd3a21220f45a898f0822930b6a7bfa38
Reviewed-on: https://go-review.googlesource.com/5801
Reviewed-by: Rob Pike <r@golang.org>
The alias should exist for both 386 and amd64.
There were a few others missing as well. Add them.
Change-Id: Ia0c3e71abc79f67a7a66941c0d932a8d5d6e9989
Reviewed-on: https://go-review.googlesource.com/5800
Reviewed-by: Russ Cox <rsc@golang.org>
Previously, we didn't handle absolute DNS names in certificates the same
way as Chromium, and we probably shouldn't diverge from major browsers.
Change-Id: I56a3962ad1002f68b5dbd65ae90991b82c2f5629
Reviewed-on: https://go-review.googlesource.com/5692
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
RFC 6125 now specifies that wildcards are only allowed for the leftmost
label in a pattern: https://tools.ietf.org/html/rfc6125#section-6.4.3.
This change updates Go to match the behaviour of major browsers in this
respect.
Fixes #9834.
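As a rough illustration of the rule (matchHostname is a hypothetical
matcher, not crypto/x509's implementation, and it ignores case folding,
trailing dots, and IP addresses): a wildcard may only stand in for the
entire leftmost label.

package main

import (
    "fmt"
    "strings"
)

// matchHostname reports whether pattern matches host under the
// leftmost-label-only wildcard rule of RFC 6125 section 6.4.3.
func matchHostname(pattern, host string) bool {
    patternLabels := strings.Split(pattern, ".")
    hostLabels := strings.Split(host, ".")
    if len(patternLabels) != len(hostLabels) {
        return false
    }
    for i, p := range patternLabels {
        if i == 0 && p == "*" {
            continue // wildcard may replace the whole leftmost label only
        }
        if p != hostLabels[i] {
            return false
        }
    }
    return true
}

func main() {
    fmt.Println(matchHostname("*.example.com", "foo.example.com")) // true
    fmt.Println(matchHostname("foo.*.com", "foo.example.com"))     // false
    fmt.Println(matchHostname("*.example.com", "a.b.example.com")) // false
}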
Change-Id: I37c10a35177133624568f2e0cf2767533926b04a
Reviewed-on: https://go-review.googlesource.com/5691
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Some servers which misunderstood the point of the CertificateRequest
message send huge reply records. These records are large enough that
they were considered “insane” by the TLS code and rejected.
This change removes the sanity test for record lengths, although the
maxCiphertext test, just above, still remains; it (roughly) enforces
the 16KB protocol limit on record sizes:
https://tools.ietf.org/html/rfc5246#section-6.2.1
Fixes #8928.
Change-Id: Idf89a2561b1947325b7ddc2613dc2da638d7d1c9
Reviewed-on: https://go-review.googlesource.com/5690
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
There was a missing continue that caused certificates with critical
certificate-policy extensions to be rejected. Additionally, that code
structure in general was prone to exactly that bug so I changed it
around to hopefully be more robust in the future.
Fixes #9964.
Change-Id: I58fc6ef3a84c1bd292a35b8b700f44ef312ec1c1
Reviewed-on: https://go-review.googlesource.com/5670
Reviewed-by: Andrew Gerrand <adg@golang.org>
iOS devices can only run tests serially.
Change-Id: I3f4e7abddf812a186895d9d5138999c8bded698f
Reviewed-on: https://go-review.googlesource.com/5751
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
Currently sync.Mutex is fully cooperative. That is, once contention is
discovered, the goroutine calls into the scheduler. This is suboptimal,
as the resource can become free soon after (especially if critical
sections are short). Server software usually runs at ~50% CPU
utilization, so switching to other goroutines is not necessarily
profitable.
This change adds limited active spinning to sync.Mutex if:
1. running on a multicore machine and
2. GOMAXPROCS>1 and
3. there is at least one other running P and
4. local runq is empty.
Unlike the runtime mutex, we don't do passive spinning, because there
can be work on the global runq or on other Ps.
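A simplified sketch of the idea, not the actual sync.Mutex code: on
contention, spin briefly in the hope that the holder releases soon, then
fall back to the scheduler. The spinMutex type, the spinLimit constant, and
the Gosched fallback below are stand-ins; the real change spins with
procyield and parks on a semaphore.

package main

import (
    "fmt"
    "runtime"
    "sync/atomic"
)

// spinMutex is a toy lock used only to illustrate the shape of the change.
type spinMutex struct {
    state int32
}

const spinLimit = 4

func (m *spinMutex) Lock() {
    spins := 0
    for {
        if atomic.CompareAndSwapInt32(&m.state, 0, 1) {
            return
        }
        if spins < spinLimit && runtime.NumCPU() > 1 {
            // Active spinning: burn a few cycles without yielding. The
            // real code also requires GOMAXPROCS>1, another running P,
            // and an empty local run queue.
            spins++
            for i := 0; i < 30; i++ {
            }
            continue
        }
        // Passive fallback: hand the CPU back to the scheduler.
        runtime.Gosched()
    }
}

func (m *spinMutex) Unlock() {
    atomic.StoreInt32(&m.state, 0)
}

func main() {
    var mu spinMutex
    mu.Lock()
    mu.Unlock()
    fmt.Println("locked and unlocked")
}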
benchmark old ns/op new ns/op delta
BenchmarkMutexNoSpin 1271 1272 +0.08%
BenchmarkMutexNoSpin-2 702 683 -2.71%
BenchmarkMutexNoSpin-4 377 372 -1.33%
BenchmarkMutexNoSpin-8 197 190 -3.55%
BenchmarkMutexNoSpin-16 131 122 -6.87%
BenchmarkMutexNoSpin-32 170 164 -3.53%
BenchmarkMutexSpin 4724 4728 +0.08%
BenchmarkMutexSpin-2 2501 2491 -0.40%
BenchmarkMutexSpin-4 1330 1325 -0.38%
BenchmarkMutexSpin-8 684 684 +0.00%
BenchmarkMutexSpin-16 414 372 -10.14%
BenchmarkMutexSpin-32 559 469 -16.10%
BenchmarkMutex 19.1 19.1 +0.00%
BenchmarkMutex-2 81.6 54.3 -33.46%
BenchmarkMutex-4 143 100 -30.07%
BenchmarkMutex-8 154 156 +1.30%
BenchmarkMutex-16 140 159 +13.57%
BenchmarkMutex-32 141 163 +15.60%
BenchmarkMutexSlack 33.3 31.2 -6.31%
BenchmarkMutexSlack-2 122 97.7 -19.92%
BenchmarkMutexSlack-4 168 158 -5.95%
BenchmarkMutexSlack-8 152 158 +3.95%
BenchmarkMutexSlack-16 140 159 +13.57%
BenchmarkMutexSlack-32 146 162 +10.96%
BenchmarkMutexWork 154 154 +0.00%
BenchmarkMutexWork-2 89.2 89.9 +0.78%
BenchmarkMutexWork-4 139 86.1 -38.06%
BenchmarkMutexWork-8 177 162 -8.47%
BenchmarkMutexWork-16 170 173 +1.76%
BenchmarkMutexWork-32 176 176 +0.00%
BenchmarkMutexWorkSlack 160 160 +0.00%
BenchmarkMutexWorkSlack-2 103 99.1 -3.79%
BenchmarkMutexWorkSlack-4 155 148 -4.52%
BenchmarkMutexWorkSlack-8 176 170 -3.41%
BenchmarkMutexWorkSlack-16 170 173 +1.76%
BenchmarkMutexWorkSlack-32 175 176 +0.57%
"No work" benchmarks are not very interesting (BenchmarkMutex and
BenchmarkMutexSlack), as they are absolutely not realistic.
Fixes #8889
Change-Id: I6f14f42af1fa48f73a776fdd11f0af6dd2bb428b
Reviewed-on: https://go-review.googlesource.com/5430
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Change 2096 removed unwanted allocations and noise from the test that
uses AllocsPerRun. Now it's safe to enable this canary test on the
netpoll hot paths.
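For reference, the canary pattern with testing.AllocsPerRun looks like the
sketch below (a generic illustration, not the netpoll test itself; hotPath
is a stand-in):

package main

import (
    "fmt"
    "testing"
)

var sink int

// hotPath stands in for the code under test; it performs no allocation.
func hotPath(x int) int { return x * 2 }

func main() {
    // AllocsPerRun reports the average number of heap allocations per call.
    // A canary test asserts this stays at zero so regressions are caught.
    allocs := testing.AllocsPerRun(1000, func() {
        sink = hotPath(sink + 1)
    })
    fmt.Println("allocs per run:", allocs) // expected: 0
}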
Change-Id: Icdbee813d81c1410a48ea9960d46447042976905
Reviewed-on: https://go-review.googlesource.com/5713
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
This check is expensive and adversely impacts startup times for some
servers with several large RSA keys.
It was nice to have, but it's not really going to stop a targeted
attack and was never designed to – hopefully people's private keys
aren't attacker-controlled!
Overall I think the feeling is that people would rather have the CPU
time back.
Fixes #6626.
Change-Id: I0143a58c9f22381116d4ca2a3bbba0d28575f3e5
Reviewed-on: https://go-review.googlesource.com/5641
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Adam Langley <agl@golang.org>
This change deletes the C implementations of
the Go compiler and assembler from the master branch.
The Go implementations are a bit slower right now,
due mainly to garbage generated by taking addresses
of stack variables all over the place (it was C code,
after all). That will be cleaned up (mechanically) over the
next week or so, and things will get faster.
Change-Id: I66b2b3477aec8835f9960d0798f5752dcd98d08f
The slow path of heapBitsForObject somewhat subtly assumes that the
pointer will not point to the first word of the object and will round
the pointer incorrectly if this assumption is violated. The assumption
is safe because the fast path should always take care of this case,
but there's no benefit to making it: it makes the code more difficult
to experiment with than necessary, and it's trivial to eliminate.
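The rounding in question is plain base-relative arithmetic; a worked
illustration with made-up numbers (objectBase is not the runtime's code):

package main

import "fmt"

// objectBase rounds an interior pointer p down to the start of the object
// that contains it, given the span's base address and its uniform element
// size. When p already points at the first word of an object, the division
// simply yields that same address, so no special case is needed.
func objectBase(p, spanBase, elemSize uintptr) uintptr {
    return spanBase + (p-spanBase)/elemSize*elemSize
}

func main() {
    const spanBase, elemSize = 0x1000, 48
    fmt.Printf("%#x\n", objectBase(0x1000+100, spanBase, elemSize)) // 0x1060
    fmt.Printf("%#x\n", objectBase(0x1000+96, spanBase, elemSize))  // 0x1060
}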
Change-Id: Iedd336f7d529a27d3abeb83e77dfb32a285ea73a
Reviewed-on: https://go-review.googlesource.com/5636
Reviewed-by: Russ Cox <rsc@golang.org>
Ran rsc.io/grind rev 6f0e601 on the source files.
The cleanups move var declarations as close to the use
as possible, splitting disjoint uses of the var into separate
variables. They also remove dead code (especially in
func sudoaddable), which helps with the var moving.
There's more cleanup to come, but this alone cuts the
time spent compiling html/template on my 2013 MacBook Pro
from 3.1 seconds to 2.3 seconds.
Change-Id: I4de499f47b1dd47a560c310bbcde6b08d425cfd6
Reviewed-on: https://go-review.googlesource.com/5637
Reviewed-by: Rob Pike <r@golang.org>