- remove use of implicit string concatenation
- these appear to be the only files compiling correctly under test
that used implicit string concatenation; a sketch of the rewrite follows
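For illustration only (not code from this CL), the kind of rewrite the change applies throughout the tree:

// A minimal illustration of replacing implicit concatenation of
// adjacent string literals with an explicit +.
package main

import "fmt"

func main() {
	// Before this change one could write adjacent literals, e.g.
	//     s := "hello, " "world"
	// That form no longer compiles; spell the concatenation out with +:
	s := "hello, " + "world"
	fmt.Println(s)
}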
R=rsc
https://golang.org/cl/172043
and release.2009-12-09 (where we are)
shorten tags because it is too hard to look up the
full-size hash, and it is overkill anyway.
R=r
https://golang.org/cl/171047
UTF-8 string, Yconv() converts it into an octal sequence. If the
string converted to more than 30 bytes, the str buffer would
overflow. For example, 4 Greek runes became 32 bytes, 3 Hiragana
runes became 36 bytes, and 2 Gothic runes became 32 bytes. In
8l, 6l and 5l the function is Sconv(). For some reason, only 5l uses
the constant STRINGSZ (defined as 200) for the buffer size.
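For reference, a small sketch (plain Go, not the linker's C) of the arithmetic behind those overflows: each byte of the UTF-8 encoding expands to a four-character octal escape (\ooo), so a handful of multi-byte runes easily exceeds a 30-byte buffer.

package main

import (
	"fmt"
	"unicode/utf8"
)

// escapedLen is the size of a string once every byte is written as a
// four-character octal escape such as \316.
func escapedLen(s string) int {
	return 4 * len(s)
}

func main() {
	// 4 Greek runes, 3 Hiragana runes, 2 Gothic runes.
	for _, s := range []string{"αβγδ", "ひらが", "𐌰𐌱"} {
		fmt.Printf("%d runes, %d UTF-8 bytes -> %d escaped bytes\n",
			utf8.RuneCountInString(s), len(s), escapedLen(s))
	}
}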
R=rsc
https://golang.org/cl/168045
FreeBSD was passing stk as the new thread's stack base, while
stk is the top of the stack in Go. The added check should cause
a trap if this ever comes up in any new ports or regresses
in current ones.
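A minimal sketch (not the runtime's code) of the base/top distinction: on architectures where the stack grows downward, a new thread must start with its stack pointer at the highest address of the region (the top), not at the lowest (the base).

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	const stackSize = 8192
	stack := make([]byte, stackSize)

	base := uintptr(unsafe.Pointer(&stack[0])) // lowest address of the region
	top := base + stackSize                    // one past the highest byte; the initial SP

	fmt.Printf("base = %#x\ntop  = %#x\n", base, top)
	// Handing the kernel base instead of top would leave the thread's first
	// pushes writing below the allocation, which is the situation the new
	// check is meant to trap.
}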
R=rsc
CC=golang-dev
https://golang.org/cl/167055
nodes in the tree are nested with respect to one another.
A simple change to the Visitor interface makes it possible
to do this (for example, to maintain the current node depth, or
knowledge of the name of the current function).
Visit(nil) is called at the end of a node's children;
this makes possible the channel-based interface below,
amongst other possibilities.
It is still just as simple to get the original behaviour: just
return the same Visitor from Visit.
Here are a couple of possible Visitor types.
// closure-based
type FVisitor func(n interface{}) FVisitor

func (f FVisitor) Visit(n interface{}) Visitor {
	if v := f(n); v != nil {
		return v
	}
	// Return a true nil interface so a nil FVisitor does not
	// masquerade as a non-nil Visitor.
	return nil
}

// channel-based
type CVisitor chan Visit

type Visit struct {
	node  interface{}
	reply chan CVisitor
}

func (v CVisitor) Visit(n interface{}) Visitor {
	if n == nil {
		// End of this node's children: tell the receiver we are done.
		close(v)
		return nil
	}
	reply := make(chan CVisitor)
	v <- Visit{n, reply}
	r := <-reply
	if r == nil {
		return nil
	}
	return r
}
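For context, a self-contained sketch (not part of this CL) of the closure-based style against today's go/ast package, which adopted this Visit-returns-Visitor convention, including the Visit(nil) call after a node's children:

package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"strings"
)

// depthVisitor is a closure-based visitor in the style of FVisitor above,
// carrying the current depth in the closure.
type depthVisitor func(n ast.Node) depthVisitor

func (f depthVisitor) Visit(n ast.Node) ast.Visitor {
	if next := f(n); next != nil {
		return next
	}
	return nil // a nil interface stops ast.Walk from descending
}

func main() {
	const src = `package p

func f() {
	if true {
		println("hi")
	}
}
`
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "p.go", src, 0)
	if err != nil {
		panic(err)
	}

	// visit builds a visitor for a given depth; entering a node prints it
	// and returns a visitor one level deeper, while the Visit(nil) call
	// that Walk makes after a node's children simply returns nil.
	var visit func(depth int) depthVisitor
	visit = func(depth int) depthVisitor {
		return func(n ast.Node) depthVisitor {
			if n == nil {
				return nil
			}
			fmt.Printf("%s%T\n", strings.Repeat("  ", depth), n)
			return visit(depth + 1)
		}
	}
	ast.Walk(visit(0), file)
}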
R=gri
CC=rsc
https://golang.org/cl/166047
Roughly 33% faster for simple cases, probably more for complex ones.
Before:
mallocs per Sprintf(""): 4
mallocs per Sprintf("xxx"): 6
mallocs per Sprintf("%x"): 10
mallocs per Sprintf("%x %x"): 12
Now:
mallocs per Sprintf(""): 2
mallocs per Sprintf("xxx"): 3
mallocs per Sprintf("%x"): 5
mallocs per Sprintf("%x %x"): 7
Speed improves by avoiding mallocs and by sharing a bytes.Buffer
between print.go and format.go rather than copying the data back after each
printed item.
Before:
fmt_test.BenchmarkSprintfEmpty 1000000 1346 ns/op
fmt_test.BenchmarkSprintfString 500000 3461 ns/op
fmt_test.BenchmarkSprintfInt 500000 3671 ns/op
Now:
fmt_test.BenchmarkSprintfEmpty 2000000 995 ns/op
fmt_test.BenchmarkSprintfString 1000000 2745 ns/op
fmt_test.BenchmarkSprintfInt 1000000 2391 ns/op
fmt_test.BenchmarkSprintfIntInt 500000 3751 ns/op
I believe there is more to get but this is a good milestone.
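As a side note, malloc counts like the ones above can be reproduced with testing.AllocsPerRun (a later addition to the standard library); absolute numbers on a current toolchain will differ from these 2009 figures.

package main

import (
	"fmt"
	"testing"
)

func main() {
	cases := []struct {
		name string
		call func()
	}{
		{`Sprintf("")`, func() { _ = fmt.Sprintf("") }},
		{`Sprintf("xxx")`, func() { _ = fmt.Sprintf("xxx") }},
		{`Sprintf("%x")`, func() { _ = fmt.Sprintf("%x", 7) }},
		{`Sprintf("%x %x")`, func() { _ = fmt.Sprintf("%x %x", 7, 112) }},
	}
	for _, tc := range cases {
		// AllocsPerRun reports the average number of allocations per call.
		fmt.Printf("mallocs per %s: %g\n", tc.name, testing.AllocsPerRun(100, tc.call))
	}
}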
R=rsc
CC=golang-dev, hong
https://golang.org/cl/166076
For 386 we use the [f]statfs64 system call, which takes three
parameters: the filename, the size of the statfs64 structure,
and a pointer to the structure itself.
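For illustration, a short usage sketch of the resulting package API as it looks today (assuming Linux; the three-argument statfs64 detail is hidden behind syscall.Statfs):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/", &st); err != nil {
		fmt.Println("statfs:", err)
		return
	}
	// Free space in bytes is free blocks times the filesystem block size.
	fmt.Printf("block size %d, free blocks %d\n", st.Bsize, st.Bfree)
}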
R=rsc
https://golang.org/cl/166073