The root block was used as a sentinel. This meant we always had to
allocate a second block on the heap, even if the set held only a few
small elements.
We now use the root block: it is always the block with the smallest
offset. The logic becomes very messy without a sentinel, so we still
keep one (a special singleton block) and return it, when appropriate,
from the first, last, and next wrappers.
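A minimal sketch of the shape of this layout. All names and field types here are illustrative assumptions, not the package's actual implementation:

    // Sketch only: the real block and Sparse types differ in detail.
    package intsets

    type block struct {
        offset int       // offset of the first bit covered by this block
        bits   [4]uint64 // one fixed-size chunk of the bit vector
        next   *block    // next block, in increasing offset order
    }

    func (b *block) empty() bool { return b.bits == [4]uint64{} }

    // none is the shared sentinel: a singleton block handed out by the
    // first/last/next wrappers when there is no real block to return.
    var none block

    type Sparse struct {
        // root is stored by value, so a set holding only a few small
        // elements needs no heap allocation at all.
        root block
    }

    // first returns the block with the smallest offset, or &none if the
    // set is empty.
    func (s *Sparse) first() *block {
        if s.root.empty() {
            return &none
        }
        return &s.root
    }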
Also add some benchmarks and make some optimizations (a hypothetical sketch of the insert/probe benchmark shape follows the table below):
name                            old time/op    new time/op    delta
Popcount-4                      2.18ns ± 1%    2.21ns ± 1%     +1.47%
InsertProbeSparse_2_10-4        76.2ns ±23%    37.2ns ± 1%    -51.21%
InsertProbeSparse_10_10-4        240ns ±15%     162ns ± 4%    -32.58%
InsertProbeSparse_10_1000-4      419ns ± 4%     371ns ±19%    -11.43%
InsertProbeSparse_100_100-4     2.30µs ± 1%    1.93µs ± 1%    -16.08%
InsertProbeSparse_100_10000-4   2.12µs ± 3%    2.07µs ± 1%     -2.11%
UnionDifferenceSparse-4          165µs ±16%     170µs ± 9%       ~
UnionDifferenceHashTable-4       310µs ±10%     291µs ±17%       ~
AppendTo-4                      11.0µs ± 0%    11.0µs ± 0%     -0.35%

name                            old alloc/op   new alloc/op   delta
Popcount-4                       0.00B ±NaN%    0.00B ±NaN%      ~
InsertProbeSparse_2_10-4         64.0B ± 0%      0.0B ±NaN%   -100.00%
InsertProbeSparse_10_10-4        64.0B ± 0%      0.0B ±NaN%   -100.00%
InsertProbeSparse_10_1000-4       256B ± 0%      192B ± 0%     -25.00%
InsertProbeSparse_100_100-4      64.0B ± 0%      0.0B ±NaN%   -100.00%
InsertProbeSparse_100_10000-4     256B ± 0%      192B ± 0%     -25.00%
UnionDifferenceSparse-4         59.4kB ± 0%    59.2kB ± 0%     -0.32%
UnionDifferenceHashTable-4       138kB ± 0%     138kB ± 0%       ~
AppendTo-4                       0.00B ±NaN%    0.00B ±NaN%      ~

name                            old allocs/op  new allocs/op  delta
Popcount-4                        0.00 ±NaN%     0.00 ±NaN%      ~
InsertProbeSparse_2_10-4          1.00 ± 0%      0.00 ±NaN%   -100.00%
InsertProbeSparse_10_10-4         1.00 ± 0%      0.00 ±NaN%   -100.00%
InsertProbeSparse_10_1000-4       4.00 ± 0%      3.00 ± 0%     -25.00%
InsertProbeSparse_100_100-4       1.00 ± 0%      0.00 ±NaN%   -100.00%
InsertProbeSparse_100_10000-4     4.00 ± 0%      3.00 ± 0%     -25.00%
UnionDifferenceSparse-4            928 ± 0%       925 ± 0%     -0.32%
UnionDifferenceHashTable-4         271 ± 0%       271 ± 0%       ~
AppendTo-4                        0.00 ±NaN%     0.00 ±NaN%      ~
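A hypothetical sketch of how the InsertProbeSparse_n_max benchmarks above might be shaped (assumed names and parameters; not the actual test code):

    // Hypothetical benchmark sketch: insert n pseudo-random elements
    // drawn from [0, max), then probe for each of them.
    package intsets_test

    import (
        "math/rand"
        "testing"

        "golang.org/x/tools/container/intsets"
    )

    func benchmarkInsertProbe(b *testing.B, n, max int) {
        elems := make([]int, n)
        for i := range elems {
            elems[i] = rand.Intn(max)
        }
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            var s intsets.Sparse
            for _, x := range elems {
                s.Insert(x)
            }
            for _, x := range elems {
                if !s.Has(x) {
                    b.Fatal("missing element")
                }
            }
        }
    }

    func BenchmarkInsertProbeSparse_2_10(b *testing.B)  { benchmarkInsertProbe(b, 2, 10) }
    func BenchmarkInsertProbeSparse_10_10(b *testing.B) { benchmarkInsertProbe(b, 10, 10) }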
Fixes golang/go#21311.
Change-Id: Ie472a2afa269c21cb33b22ffdac8dd2594b816ac
Reviewed-on: https://go-review.googlesource.com/53431
Reviewed-by: Alan Donovan <adonovan@google.com>
I was just reading through intsets and decided to knock out a few TODOs.
Change-Id: I677dbcc5ff934fbe0f0af09a4741e708a893f8db
Reviewed-on: https://go-review.googlesource.com/2733
Reviewed-by: Alan Donovan <adonovan@google.com>
Fixes various problems reported by go vet.
Change-Id: I12a6fdba8f911b21805d8e42903f8f6a5033790a
Reviewed-on: https://go-review.googlesource.com/2163
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Rewrite performed with this command:
sed -i '' 's_code.google.com/p/go\._golang.org/x/_g' \
$(grep -lr 'code.google.com/p/go.' *)
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/170920043
Also:
- increase the sparsity of sets in the benchmarks.
- remove the TODO in forEach: subword masks had no benefit (see the sketch after this list).
- minor cleanup.
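For reference, the straightforward per-bit loop for visiting the set bits of a word looks roughly like this (illustrative only; math/bits postdates this CL and the package has its own helpers):

    // Illustrative sketch of visiting each set bit of a word, lowest
    // first; not the package's actual forEach implementation.
    package main

    import (
        "fmt"
        "math/bits"
    )

    func forEachBit(w uint64, f func(i int)) {
        for w != 0 {
            f(bits.TrailingZeros64(w)) // index of the lowest set bit
            w &= w - 1                 // clear that bit
        }
    }

    func main() {
        forEachBit(0x25, func(i int) { fmt.Println(i) }) // prints 0, 2, 5
    }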
LGTM=crawshaw
R=crawshaw
CC=golang-codereviews
https://golang.org/cl/103470049
(I forgot about this when we added support for negative elements generally.)
We use floating-point notation for negative elements. The order of the
output is reversed from the previous (little-endian) behaviour, since
that makes the floating-point form more readable.
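For example, assuming the documented BitString format (element i contributes 2^i, with negative elements shown after a radix point):

    // Assumes the documented BitString behaviour; the element values
    // here are just for illustration.
    package main

    import (
        "fmt"

        "golang.org/x/tools/container/intsets"
    )

    func main() {
        var s intsets.Sparse
        for _, x := range []int{-3, 0, 4, 5} {
            s.Insert(x)
        }
        // Highest element first, then down past the radix point to 2^-3.
        fmt.Println(s.BitString()) // "110001.001"
    }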
LGTM=gri
R=gri
CC=golang-codereviews
https://golang.org/cl/95570043
This is both easier to read and 25% shorter (helpful when
using String() as a map key for interning sets).
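A sketch of the interning pattern mentioned above; the intern helper and its map are illustrative, not part of the package:

    // Illustrative only: intern and interned are not part of intsets.
    package main

    import (
        "fmt"

        "golang.org/x/tools/container/intsets"
    )

    var interned = make(map[string]*intsets.Sparse)

    // intern returns a canonical *Sparse for each distinct set,
    // keyed by its (now shorter) String form.
    func intern(s *intsets.Sparse) *intsets.Sparse {
        key := s.String()
        if canon, ok := interned[key]; ok {
            return canon
        }
        interned[key] = s
        return s
    }

    func main() {
        var a, b intsets.Sparse
        a.Insert(1)
        a.Insert(2)
        b.Insert(1)
        b.Insert(2)
        fmt.Println(intern(&a) == intern(&b)) // true: equal sets share one canonical copy
    }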
LGTM=gri
R=gri
CC=golang-codereviews
https://golang.org/cl/96370045
intsets.Sparse is a sparse bit vector. It uses space proportional to
the number of elements, not to the maximum element (as a dense bit
vector would). A forthcoming CL will make use of it in go/pointer,
where it reduces solve time by 78%. A similar representation is used
for Andersen's analysis in GCC and LLVM.
+ Tests.
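A brief usage sketch (the methods are the package's public API; the values are just for illustration):

    package main

    import (
        "fmt"

        "golang.org/x/tools/container/intsets"
    )

    func main() {
        var a, b intsets.Sparse

        // Space grows with the number of elements, so a large element
        // costs no more than a small one.
        a.Insert(1)
        a.Insert(1000000)
        b.Insert(1)
        b.Insert(2)

        fmt.Println(a.Has(1000000))  // true
        a.UnionWith(&b)              // a is now {1 2 1000000}
        fmt.Println(a.Len())         // 3
        fmt.Println(a.AppendTo(nil)) // [1 2 1000000]
    }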
LGTM=sameer, crawshaw, gri
R=gri
CC=crawshaw, golang-codereviews, sameer
https://golang.org/cl/10837043