As these tests were originally written in bash, they are not
designed to be particularly hermetic. This CL adds various
protective mechanisms to try to catch cases where the tests
cannot run in parallel.
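A hedged sketch of the kind of protection this involves (the test
name and commands below are illustrative, not the actual cmd/go test
code): each parallel test works in its own temporary directory and
passes environment overrides per command instead of mutating
process-wide state.

package hermetic

import (
	"io/ioutil"
	"os"
	"os/exec"
	"testing"
)

func TestBuildUsesPrivateGopath(t *testing.T) {
	t.Parallel() // safe: nothing below touches shared or global state

	// Give this test its own scratch GOPATH so concurrent tests
	// cannot trip over each other's files.
	gopath, err := ioutil.TempDir("", "go-test-gopath")
	if err != nil {
		t.Fatal(err)
	}
	defer os.RemoveAll(gopath)

	cmd := exec.Command("go", "env", "GOPATH")
	// Override GOPATH only for this command; os.Setenv would leak
	// into other tests running in parallel.
	cmd.Env = append(os.Environ(), "GOPATH="+gopath)
	if out, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("go env GOPATH: %v\n%s", err, out)
	}
}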
Change-Id: I983bf7b6ffba04eda58b4939eb89b0bdfcda8eff
Reviewed-on: https://go-review.googlesource.com/10911
Reviewed-by: Andrew Gerrand <adg@golang.org>
Examine the mtime of an existing file to guess a length of time to
sleep to ensure a different mtime.
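A minimal sketch of the idea (the helper name is hypothetical, not
the actual test code): look at an existing file's mtime to guess the
filesystem's timestamp resolution, then sleep long enough that a
later write is guaranteed a different mtime.

package main

import (
	"fmt"
	"os"
	"time"
)

// sleepForNewMtime sleeps long enough that a file written after it
// returns will have a different mtime than path.
func sleepForNewMtime(path string) error {
	fi, err := os.Stat(path)
	if err != nil {
		return err
	}
	d := 10 * time.Millisecond
	if fi.ModTime().Nanosecond() == 0 {
		// The filesystem appears to record mtimes with one-second
		// resolution, so sleep past a full second.
		d = time.Second + 10*time.Millisecond
	}
	time.Sleep(d)
	return nil
}

func main() {
	if err := sleepForNewMtime(os.Args[0]); err != nil {
		fmt.Println(err)
	}
}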
Change-Id: I9e8b5c9486f5c3c8bd63125e3ed4763ce1ba767d
Reviewed-on: https://go-review.googlesource.com/10932
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This avoids a race with gcmarkwb_m that was leading to faults.
Fixes #10212.
Change-Id: I6fcf8d09f2692227063ce29152cb57366ea22487
Reviewed-on: https://go-review.googlesource.com/10816
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Forgot this one in my previous commit.
Change-Id: Ief089e99bdad24b3bcfb075497dc259d06cc727c
Reviewed-on: https://go-review.googlesource.com/10913
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Might get the Android build passing, or at least going further.
Change-Id: I08f97156a687abe5a3d95203922f4ffd84fbb212
Reviewed-on: https://go-review.googlesource.com/10924
Reviewed-by: David Crawshaw <crawshaw@golang.org>
These were found by grepping the comments from the Go code and feeding
the output to aspell.
Change-Id: Id734d6c8d1938ec3c36bd94a4dbbad577e3ad395
Reviewed-on: https://go-review.googlesource.com/10941
Reviewed-by: Aamir Khan <syst3m.w0rm@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This issue was fixed in CL 10900.
Change-Id: I88f107cb73c8a515f39e02506ddd2ad1e286b1fb
Reviewed-on: https://go-review.googlesource.com/10940
Run-TryBot: David du Colombier <0intro@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
When the Stat or Fstat system calls return -1,
dirstat incorrectly returns ErrShortStat.
However, the error returned by Stat or Fstat
could be different. For example, when the
file doesn't exist, they return "does not exist".
Dirstat should return the error returned by
the system call.
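A self-contained sketch of the fix's shape (statSyscall is a
hypothetical stand-in for the Plan 9 Stat/Fstat system call; the real
code lives in the os package's Plan 9 implementation): propagate the
system call's error instead of always returning ErrShortStat.

package main

import (
	"errors"
	"fmt"
	"os"
)

var errDoesNotExist = errors.New("does not exist")

// statSyscall stands in for syscall.Stat on Plan 9; here it always
// fails as if the file were missing.
func statSyscall(name string, buf []byte) (int, error) {
	return -1, errDoesNotExist
}

func dirstat(name string) ([]byte, error) {
	buf := make([]byte, 144)
	n, err := statSyscall(name, buf)
	if err != nil {
		// Return the error reported by the system call rather
		// than a fixed ErrShortStat.
		return nil, &os.PathError{Op: "stat", Path: name, Err: err}
	}
	return buf[:n], nil
}

func main() {
	_, err := dirstat("nonexistent")
	fmt.Println(err) // stat nonexistent: does not exist
}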
Fixes #10911.
Fixes #11132.
Change-Id: Icf242d203d256f12366b1e277f99b1458385104a
Reviewed-on: https://go-review.googlesource.com/10900
Run-TryBot: David du Colombier <0intro@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Process.handle was accessed without synchronization while wait() and
signal() could be called concurrently.
A first solution was to add a Mutex to Process, but that was
probably too invasive given that Process.handle is only used on
Windows.
This version uses atomic operations to read the handle value. There is
still a race between isDone() and the value of the handle, but it only
leads to slightly incorrect error codes. The caller may get
errors.New("os: process already finished")
instead of
syscall.EINVAL
which sounds harmless.
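A simplified, self-contained sketch of the pattern (not the actual
os package code): the handle is read with sync/atomic so a concurrent
wait() and signal() do not race, and a separate done flag produces the
"process already finished" error.

package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

type process struct {
	handle uintptr // read and written only via sync/atomic
	done   uint32  // set to 1 once wait() has reaped the process
}

func (p *process) signal() error {
	handle := atomic.LoadUintptr(&p.handle)
	if atomic.LoadUint32(&p.done) != 0 {
		// wait() finished in the meantime; report that instead of
		// passing a stale handle to the system call.
		return errors.New("os: process already finished")
	}
	// The real code would pass handle to the signalling system call.
	_ = handle
	return nil
}

func (p *process) wait() {
	// ... reap the process, then mark it done ...
	atomic.StoreUint32(&p.done, 1)
}

func main() {
	p := &process{}
	go p.wait()
	fmt.Println(p.signal())
}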
Fixes #9382
Change-Id: Iefcc687a1166d5961c8f27154647b9b15a0f748a
Reviewed-on: https://go-review.googlesource.com/9904
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This was a refactoring bug introduced in the 'go tool compile'
change, CL 10289.
Change-Id: Ibfd333be39ec72bba331fdf352df619cc21851a9
Reviewed-on: https://go-review.googlesource.com/10849
Reviewed-by: Minux Ma <minux@golang.org>
GCM is traditionally used with a 96-bit nonce, but the standard allows
for nonces of any size. Non-standard nonce sizes are required in some
protocols, so add support for them in crypto/cipher's GCM
implementation.
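A short usage sketch of the new API (the key and nonce here are
zero-valued placeholders, for illustration only):
cipher.NewGCMWithNonceSize builds an AEAD with a non-standard nonce
length, such as 16 bytes instead of the usual 12.

package main

import (
	"crypto/aes"
	"crypto/cipher"
	"fmt"
)

func main() {
	key := make([]byte, 16) // AES-128 key (all zeros, illustration only)
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	// Request a 16-byte nonce instead of the standard 96-bit one.
	aead, err := cipher.NewGCMWithNonceSize(block, 16)
	if err != nil {
		panic(err)
	}
	nonce := make([]byte, aead.NonceSize())
	ct := aead.Seal(nil, nonce, []byte("hello"), nil)
	fmt.Printf("%x\n", ct)
}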
Change-Id: I7feca7e903eeba557dcce370412b6ffabf1207ab
Reviewed-on: https://go-review.googlesource.com/8946
Reviewed-by: Adam Langley <agl@golang.org>
Run-TryBot: Adam Langley <agl@golang.org>
Reflect the process changes: the AUTHORS and CONTRIBUTORS
files are now updated automatically based on commit logs,
so Google committers no longer need to add first-time
contributors manually.
The documentation update will help avoid requests from new
contributors to be added.
Change-Id: I67daae5bd21246cf79fe3724838889b929bc5e66
Reviewed-on: https://go-review.googlesource.com/10824
Reviewed-by: Rob Pike <r@golang.org>
Bool codegen was generating a temp for function calls
and other complex expressions, but was not using it.
This was a refactoring bug introduced by CL 7853.
The cmp code used to do (in short):
l, r := &n1, &n2
It was changed to:
l, r := nl, nr
But the requisite assignments:
nl, nr = &n1, &n2
were only introduced on one of two code paths.
Fixes #10654.
Change-Id: Ie8de0b3a333842a048d4308e02911bb10c6915ce
Reviewed-on: https://go-review.googlesource.com/10844
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Previously we enforced both that the extended key usages of a client
certificate chain allowed for client authentication, and that the
client-auth EKU was in the leaf certificate.
This change removes the latter requirement. It's still the case that the
chain must be compatible with the client-auth EKU (i.e. that a parent
certificate isn't limited to another usage, like S/MIME), but we'll now
accept a leaf certificate with no EKUs for client-auth.
While it would be nice if all client certificates were explicit in their
intended purpose, I no longer feel that this battle is worthwhile.
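For context, a hedged sketch of the verification involved (the helper
is illustrative, not the crypto/tls internals; the leaf certificate
and root pool are assumed to come from the handshake): verification
for client auth requests ExtKeyUsageClientAuth, which the chain must
be compatible with even when the leaf lists no EKUs.

package clientauth

import "crypto/x509"

// VerifyClientCert checks a client certificate chain for the client
// authentication usage.
func VerifyClientCert(leaf *x509.Certificate, roots *x509.CertPool) error {
	opts := x509.VerifyOptions{
		Roots:     roots,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	// With this change, a leaf certificate with no EKUs is accepted
	// here as long as the rest of the chain permits client auth.
	_, err := leaf.Verify(opts)
	return err
}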
Fixes #11087.
Change-Id: I777e695101cbeba069b730163533e2977f4dc1fc
Reviewed-on: https://go-review.googlesource.com/10806
Reviewed-by: Andrew Gerrand <adg@golang.org>
Run-TryBot: Adam Langley <agl@golang.org>
After a little build coordinator change, this will get us sharding of
the race builder.
Update #11074
Change-Id: I4c55267563b6f5e213def7dd6707c837ae2106bf
Reviewed-on: https://go-review.googlesource.com/10845
Reviewed-by: Andrew Gerrand <adg@golang.org>
Change-Id: If11621985c0a5a1f2133cdc974f37fd944b93e5e
Reviewed-on: https://go-review.googlesource.com/10808
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The documentation for quick.Value says that it "returns an arbitrary
value of the given type." In spite of this, nil values for pointers were
never generated, which seems more like an oversight than an intentional
choice.
The lack of nil values meant that testing a recursive type like
type Node struct {
	Next *Node
}
with testing/quick would lead to a stack overflow since the data
structure would never terminate.
This change may break tests that don't check for nil with pointers
returned from quick.Value. Two such instances were found in the standard
library, one of which was in the testing/quick package itself.
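A small, self-contained example of what callers now need to handle
(the seed and type are chosen for illustration): quick.Value can
return a nil *Node, so code must check before dereferencing.

package main

import (
	"fmt"
	"math/rand"
	"reflect"
	"testing/quick"
)

type Node struct {
	Next *Node
}

func main() {
	r := rand.New(rand.NewSource(1))
	v, ok := quick.Value(reflect.TypeOf(&Node{}), r)
	if !ok {
		panic("quick.Value failed")
	}
	// The generated pointer may now be nil; check before following it.
	if n := v.Interface().(*Node); n == nil {
		fmt.Println("generated a nil *Node")
	} else {
		fmt.Println("generated a non-nil *Node")
	}
}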
Fixes #8818.
Change-Id: Id390dcce649d12fbbaa801ce6f58f5defed77e60
Reviewed-on: https://go-review.googlesource.com/10821
Reviewed-by: Adam Langley <agl@golang.org>
Run-TryBot: Adam Langley <agl@golang.org>
- remove TODO on the non-existent fmt.Formatter type
(type exists now)
- guard uses of imported types against nil
Change-Id: I9ae8e5a448e73c84dec1606ea9d9ed5ddeee8dc6
Reviewed-on: https://go-review.googlesource.com/10777
Reviewed-by: Alan Donovan <adonovan@google.com>
Add .exe to the executable name so it can be executed on Windows.
Use proper Windows paths when searching the vet output.
Replace Skip with Skipf.
Fixes build
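A minimal sketch of the two fixes (names are illustrative, not the
actual test code): append .exe on Windows and build the expected
paths with filepath so they match vet's output on every platform.

package main

import (
	"fmt"
	"path/filepath"
	"runtime"
)

// vetBinary returns the platform-correct name of the built vet binary.
func vetBinary(dir string) string {
	exe := "vet"
	if runtime.GOOS == "windows" {
		exe += ".exe" // Windows needs the .exe suffix to execute it
	}
	return filepath.Join(dir, exe)
}

func main() {
	// filepath.Join uses backslashes on Windows, matching vet's output.
	fmt.Println(vetBinary("testdata"), filepath.Join("testdata", "vet.go"))
}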
Change-Id: Ife40d8f5ab9d7093ca61c50683a358d4d6a3ba34
Reviewed-on: https://go-review.googlesource.com/10742
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
Reviewed-by: Patrick Mézard <patrick@mezard.eu>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Commit 1303957 was supposed to enable write barriers during the
concurrent scan phase, but it only enabled *calls* to the write
barrier during this phase. It failed to update the redundant list of
write-barrier-enabled phases in gcmarkwb_m, so it still wasn't greying
objects during the scan phase.
This commit fixes this by replacing the redundant list of phases in
gcmarkwb_m with simply checking writeBarrierEnabled. This is almost
certainly redundant with checks already done in callers, but the last
time we tried to remove these redundant checks everything got much
slower, so I'm leaving it alone for now.
Fixes #11105.
Change-Id: I00230a3cb80a008e749553a8ae901b409097e4be
Reviewed-on: https://go-review.googlesource.com/10801
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Minux Ma <minux@golang.org>
Stack barriers assume that writes through pointers to frames above the
current frame will get write barriers, and hence these frames do not
need to be re-scanned to pick up these changes. For normal writes,
this is true. However, there are places in the runtime that use
typedmemmove to potentially write through pointers to higher frames
(such as mapassign1). Currently, typedmemmove does not execute write
barriers if the destination is on the stack. If there's a stack
barrier between the current frame and the frame being modified with
typedmemmove, and the stack barrier is not otherwise hit, it's
possible that the garbage collector will never see the updated pointer
and incorrectly reclaim the object.
Fix this by making heapBitsBulkBarrier (which lies behind typedmemmove
and its variants) detect when the destination is in the stack and
unwind stack barriers up to that point, forcing mark termination to
later rescan the affected frame and collect these pointers.
Fixes #11084. Might be related to #10240, #10541, #10941, #11023,
#11027 and possibly others.
Change-Id: I323d6cd0f1d29fa01f8fc946f4b90e04ef210efd
Reviewed-on: https://go-review.googlesource.com/10791
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, write barriers are only enabled after completion of the
concurrent scan phase, as we enter the concurrent mark phase. However,
stack barriers are installed during the scan phase and assume that
write barriers will track changes to frames above the stack
barriers. Since write barriers aren't enabled until after stack
barriers are installed, we may miss modifications to the stack that
happen after installing the stack barriers and before enabling write
barriers.
Fix this by enabling write barriers during the scan phase.
This commit intentionally makes the minimal change to do this (there's
only one line of code change; the rest are comment changes). At the
very least, we should consider eliminating the ragged barrier that's
intended to synchronize the enabling of write barriers, but now just
wastes time. I've included a large comment about extensions and
alternative designs.
Change-Id: Ib20fede794e4fcb91ddf36f99bd97344d7f96421
Reviewed-on: https://go-review.googlesource.com/10795
Reviewed-by: Russ Cox <rsc@golang.org>
Currently checkmarks mode fails to rescan stacks because it sees the
leftover state bits indicating that the stacks haven't changed since
the last scan. As a result, it won't detect lost marks caused by
failing to scan stacks correctly during regular garbage collection.
Fix this by marking all stacks dirty before performing the checkmark
phase.
Change-Id: I1f06882bb8b20257120a4b8e7f95bb3ffc263895
Reviewed-on: https://go-review.googlesource.com/10794
Reviewed-by: Russ Cox <rsc@golang.org>
obj.ARET is the portable return mnemonic. ppc64.ARETURN is a legacy
alias.
This was done with
sed -i s/ppc64\.ARETURN/obj.ARET/ cmd/compile/**/*.go
sed -i s/ARETURN/obj.ARET/ cmd/internal/obj/ppc64/obj9.go
Change-Id: I4d8e83ff411cee764774a40ef4c7c34dcbca4e43
Reviewed-on: https://go-review.googlesource.com/10673
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
All of the architectures except ppc64 have only "RET" for the return
mnemonic. ppc64 used to have only "RETURN", but commit cf06ea6
introduced RET as a synonym for RETURN to make ppc64 consistent with
the other architectures. However, that commit was never followed up to
make the code itself consistent by eliminating uses of RETURN.
This commit replaces all uses of RETURN in the ppc64 assembly with
RET.
This was done with
sed -i 's/\<RETURN\>/RET/' **/*_ppc64x.s
plus one manual change to syscall/asm.s.
Change-Id: I3f6c8d2be157df8841d48de988ee43f3e3087995
Reviewed-on: https://go-review.googlesource.com/10672
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
gc should ideally consider this an error too; see golang/go#8560.
Change-Id: Ieee71c4ecaff493d7f83e15ba8c8a04ee90a4cf1
Reviewed-on: https://go-review.googlesource.com/10757
Reviewed-by: Robert Griesemer <gri@golang.org>
In x/tools, MethodSetCache was moved from x/tools/go/types to
x/tools/go/types/typeutil. Mirror that change.
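A small usage sketch of the new import path (the concrete type below
is arbitrary; note that at the time of this change the types package
itself also lived under x/tools rather than the standard library):

package main

import (
	"fmt"
	"go/types"

	"golang.org/x/tools/go/types/typeutil"
)

func main() {
	var cache typeutil.MethodSetCache // caches method sets per type
	T := types.NewPointer(types.Typ[types.Int])
	fmt.Println(cache.MethodSet(T)) // an empty method set for *int
}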
Change-Id: Ib838a9518371473c83fa4abc2778d42f33947c98
Reviewed-on: https://go-review.googlesource.com/10771
Reviewed-by: Alan Donovan <adonovan@google.com>
Currently the stack barriers are installed at the next frame boundary
after gp.sched.sp + 1024*2^n for n=0,1,2,... However, when a G is in a
system call, we set gp.sched.sp to 0, which causes stack barriers to
be installed at *every* frame. This easily overflows the slice we've
reserved for storing the stack barrier information, and causes a
"slice bounds out of range" panic in gcInstallStackBarrier.
Fix this by using gp.syscallsp instead of gp.sched.sp if it's
non-zero. This is the same logic that gentraceback uses to determine
the current SP.
Fixes #11049.
Change-Id: Ie40eeee5bec59b7c1aa715a7c17aa63b1f1cf4e8
Reviewed-on: https://go-review.googlesource.com/10755
Reviewed-by: Russ Cox <rsc@golang.org>