Robert Griesemer, Rob Pike and Ken Thompson started sketching the goals for a new language on the whiteboard on September 21, 2007. Within a few days the goals had settled into a plan to do something and a fair idea of what it would be. Design continued part-time in parallel with unrelated activities. By January 2008, Ken had started work on a compiler with which to explore ideas; it generated C code as its output. By mid-year the language had become a full-time project and had settled enough to attempt a production compiler. Meanwhile, Ian Taylor had read the draft specification and written an independent GCC front end.
In the last few months of 2008, Russ Cox joined the team, and Go reached the point where it was usable as the main programming language for the team's own work.
Go was born out of frustration with existing languages and environments for systems programming. Programming had become too difficult and the choice of languages was partly to blame. One had to choose either efficient compilation, efficient execution, or ease of programming; all three were not available in the same commonly available language. Programmers who could were choosing ease over safety and efficiency by moving to dynamic languages such as Python and JavaScript rather than C++ or, to a lesser extent, Java.
Go is an attempt to combine the ease of programming of a dynamic language with the efficiency and type safety of a compiled language. It also aims to be modern, with support for networked and multicore computing. Finally, it is intended to be fast: it should take at most a few seconds to build a large executable on a single computer. To meet these goals required addressing a number of linguistic issues: an expressive but lightweight type system; concurrency and garbage collection; rigid dependency specification; and so on. These cannot be addressed well by libraries or tools; a new language was called for.
Go is mostly in the C family (basic syntax), with significant input from the Pascal/Modula/Oberon family (declarations, packages), plus it borrows some ideas from languages inspired by Tony Hoare's CSP, such as Newsqueak and Limbo (concurrency). However, it is a new language across the board. In every respect the language was designed by thinking about what programmers do and how to make programming, at least the kind of programming we do, more effective, which means more fun.
Robert Griesemer, Rob Pike and Ken Thompson laid out the goals and original specification of the language. Ian Taylor read the draft specification and decided to write gccgo. Russ Cox joined later and helped move the language and libraries from prototype to reality.
Programming today involves too much bookkeeping, repetition, and clerical work. As Dick Gabriel says, “Old programs read like quiet conversations between a well-spoken research worker and a well-studied mechanical colleague, not as a debate with a compiler. Who'd have guessed sophistication bought such noise?” The sophistication is worthwhile—no one wants to go back to the old languages—but can it be more quietly achieved?
Go attempts to reduce the amount of typing in both senses of the word. Throughout its design, we have tried to reduce clutter and complexity. There are no forward declarations and no header files; everything is declared exactly once. Initialization is expressive, automatic, and easy to use. Syntax is clean and light on keywords. Stuttering (foo.Foo* myFoo = new(foo.Foo)) is reduced by simple type derivation using the := declare-and-initialize construct. And perhaps most radically, there is no type hierarchy: types just are, they don't have to announce their relationships. These simplifications allow Go to be expressive yet comprehensible without sacrificing, well, sophistication.
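For a concrete flavor of the := construct, here is a minimal sketch; the use of the bytes package is our own illustrative choice, not an example from the text above:

    package main

    import (
        "bytes"
        "fmt"
    )

    func main() {
        // The type of buf is inferred from the right-hand side: *bytes.Buffer.
        // Compare the stuttering form foo.Foo* myFoo = new(foo.Foo).
        buf := new(bytes.Buffer)
        buf.WriteString("declared and initialized in one step")
        fmt.Println(buf.String())
    }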
Other than declaration syntax, the differences are not major and stem from two desires. First, the syntax should feel light, without too many mandatory keywords, repetition, or arcana. Second, the language has been designed to be easy to parse: the grammar is conflict-free and can be parsed without a symbol table. This makes it much easier to build tools such as debuggers, dependency analyzers, automated documentation extractors, IDE plug-ins, and so on. C and its descendants are notoriously difficult in this regard, but in a new language it is not hard to do better.
They're only backwards if you're used to C. In C, the notion is that a variable is declared like an expression denoting its type, which is a nice idea, but the type and expression grammars don't mix very well and the results can be confusing; consider function pointers. Go mostly separates expression and type syntax and that simplifies things (using prefix * for pointers is an exception that proves the rule). In C, the declaration

    int* a, b;

declares a to be a pointer but not b; in Go

    var a, b *int

declares both to be pointers. This is clearer and more regular. Also, the := short declaration form argues that a full variable declaration should present the same order as :=, so

    var a uint64 = 1

has the same effect as

    a := uint64(1)

Parsing is also simplified by having a distinct grammar for types that is not just the expression grammar; keywords such as func and chan keep things clear.
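Restated as a complete program (a minimal sketch in current Go, which no longer requires trailing semicolons):

    package main

    import "fmt"

    func main() {
        var a, b *int // in Go, both a and b are pointers to int
        fmt.Println(a, b)

        var c uint64 = 1 // full declaration: keyword, name, type, value
        d := uint64(1)   // short form presenting the same order
        fmt.Println(c, d)
    }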
Safety. Without pointer arithmetic it's possible to create a language that can never derive an illegal address that succeeds incorrectly. Compiler and hardware technology have advanced to the point where a loop using array indices can be as efficient as a loop using pointer arithmetic. Also, the lack of pointer arithmetic can simplify the implementation of the garbage collector.
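As a small illustration of the index-based style this refers to (our own sketch, not an example from the text):

    package main

    import "fmt"

    func main() {
        xs := []int{3, 1, 4, 1, 5}
        sum := 0
        for i := 0; i < len(xs); i++ { // index the slice; no pointer arithmetic
            sum += xs[i]
        }
        fmt.Println(sum) // 14
    }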
Why are ++ and -- statements and not expressions? And why postfix, not prefix?
Without pointer arithmetic, the convenience value of pre- and postfix increment operators drops. By removing them from the expression hierarchy altogether, expression syntax is simplified and the messy issues around order of evaluation of ++ and -- (consider f(i++) and p[i] = q[++i]) are eliminated as well. The simplification is significant. As for postfix vs. prefix, either would work fine but the postfix version is more traditional; insistence on prefix arose with the STL, a library for a language whose name contains, ironically, a postfix increment.
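A minimal sketch of the consequence: i++ is a statement, so expressions such as f(i++) simply cannot be written:

    package main

    import "fmt"

    func main() {
        i := 0
        i++ // legal: a statement on its own line
        // fmt.Println(i++) // does not compile: i++ is not an expression
        fmt.Println(i) // 1
    }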
Generics may well come at some point. We don't feel an urgency for them, although we understand some programmers do.
Generics are convenient but they come at a cost in complexity in the type system and run-time. We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it. Meanwhile, Go's built-in maps and slices, plus the ability to use the empty interface to construct containers (with explicit unboxing) mean in many cases it is possible to write code that does what generics would enable, if less smoothly.
This remains an open issue.
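As a rough sketch of the empty-interface approach mentioned above (the stack here is our own illustration): a []interface{} holds values of any type, and a type assertion performs the explicit unboxing:

    package main

    import "fmt"

    func main() {
        var stack []interface{}
        stack = append(stack, 1)
        stack = append(stack, 2)

        top := stack[len(stack)-1].(int) // explicit unboxing via type assertion
        fmt.Println(top + 40)            // 42
    }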
Exceptions are a similar story. A number of designs for exceptions have been proposed, but each adds significant complexity to the language and run-time. By their very nature, exceptions span functions and perhaps even goroutines; they have wide-ranging implications. There is also concern about the effect they would have on the libraries. Exceptions are, by definition, exceptional, yet experience with other languages that support them shows they have a profound effect on library and interface specification. It would be nice to find a design that allows them to be truly exceptional without encouraging common errors to turn into special control flow that requires every programmer to compensate.
Like generics, exceptions remain an open issue.
This is answered in the general FAQ.
Object-oriented programming, at least in the languages we've used, involves too much discussion of the relationships between types, relationships that often could be derived automatically. Go takes a different approach that we're still learning about but that feels useful and powerful.
Rather than requiring the programmer to declare ahead of time that two types are related, in Go a type automatically satisfies any interface that specifies a subset of its methods. Besides reducing the bookkeeping, this approach has real advantages. Types can satisfy many interfaces at once, without the complexities of traditional multiple inheritance. Interfaces can be very lightweight—one or even zero methods in an interface can express useful concepts. Interfaces can be added after the fact if a new idea comes along or for testing—without annotating the original type. Because there are no explicit relationships between types and interfaces, there is no type hierarchy to manage.
It's possible to use these ideas to construct something analogous to type-safe Unix pipes. For instance, see how fmt.Fprintf enables formatted printing to any output, not just a file, or how the bufio package can be completely separate from file I/O, or how the crypto packages stitch together block and stream ciphers. All these ideas stem from a single interface (io.Writer) representing a single method (Write). We've only scratched the surface.
It takes some getting used to but this implicit style of type dependency is one of the most exciting things about Go.
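A minimal sketch of the idea: countingWriter below is our own illustrative type; it never mentions io.Writer, yet because it has a Write method with the right signature it satisfies the interface and can be handed to fmt.Fprintf:

    package main

    import "fmt"

    // countingWriter counts the bytes written to it.
    type countingWriter struct {
        n int
    }

    // Write satisfies io.Writer implicitly; no declaration of intent is needed.
    func (w *countingWriter) Write(p []byte) (int, error) {
        w.n += len(p)
        return len(p), nil
    }

    func main() {
        w := &countingWriter{}
        fmt.Fprintf(w, "hello, %s\n", "world")
        fmt.Println(w.n) // 13: the number of bytes formatted
    }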
Why is len a function and not a method?
To be blunt, Go isn't that kind of language. We debated this issue but decided implementing len and friends as functions was fine in practice and didn't complicate questions about the interface (in the Go type sense) of basic types. The issue didn't seem important enough to resolve that way.
Method dispatch is simplified if it doesn't need to do type matching as well. Experience with other languages told us that having a variety of methods with the same name but different signatures was occasionally useful but that it could also be confusing and fragile in practice. Matching only by name and requiring consistency in the types was a major simplifying decision in Go's type system.
Regarding operator overloading, it seems more a convenience than an absolute requirement. Again, things are simpler without it.
The same reason strings are: they are such a powerful and important data structure that providing one excellent implementation with syntactic support makes programming more pleasant. We believe that Go's implementation of maps is strong enough that it will serve for the vast majority of uses. If a specific application can benefit from a custom implementation, it's possible to write one but it will not be as convenient to use; this seems a reasonable tradeoff.
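A small example of that syntactic support, written in current Go:

    package main

    import "fmt"

    func main() {
        ages := map[string]int{"gopher": 3} // composite literal
        ages["robot"] = 7                   // assignment through the map

        if age, ok := ages["gopher"]; ok { // comma-ok distinguishes a missing key
            fmt.Println("gopher is", age)
        }
        fmt.Println(len(ages)) // 2
    }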
Map lookup requires an equality operator, which structs and arrays do not implement. They don't implement equality because equality is not well defined on such types; there are multiple considerations involving shallow vs. deep comparison, pointer vs. value comparison, how to deal with recursive structures, and so on. We may revisit this issue—and implementing equality for structs and arrays will not invalidate any existing programs—but without a clear idea of what equality of structs and arrays should mean, it was simpler to leave it out for now.
After long discussion it was decided that the typical use of maps did not require safe access from multiple threads, and in those cases where it did, the map was probably part of some larger data structure or computation that was already synchronized. Therefore requiring that all map operations grab a mutex would slow down most programs and add safety to few. This was not an easy decision, however, since it means uncontrolled map access can crash the program.
The language does not preclude atomic map updates. When required, such as when hosting an untrusted program, the implementation could interlock map access.
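A minimal sketch of the caller-side synchronization the answer assumes, using a mutex from the sync package (the counter here is our own illustration):

    package main

    import (
        "fmt"
        "sync"
    )

    var (
        mu     sync.Mutex
        counts = make(map[string]int)
    )

    // increment guards the shared map with a mutex so that
    // concurrent goroutines cannot corrupt it.
    func increment(key string) {
        mu.Lock()
        counts[key]++
        mu.Unlock()
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                increment("hits")
            }()
        }
        wg.Wait()
        fmt.Println(counts["hits"]) // 10
    }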
TODO: explain:
- package design
- slices
- oo separate from storage (abstraction vs. implementation)
- why garbage collection?
- inheritance? embedding?
- dependency declarations in the language
- oo questions: no data in interfaces, dynamic dispatch, clean separation of interface and implementation
- why no automatic numeric conversions?
- make vs. new