go/internal/telemetry/export/ocagent
Nathan Dias 268ba720d3 internal/telemetry/export/ocagent: update metric tutorial to use oragent
This change updates the metric exporting tutorial to use oragent to spin
up OpenCensus, Prometheus, and Zipkin all at once using docker-compose
rather than manually setting each one up. This allows developers to set
up an environment for testing metrics and traces with minimal effort.

While oragent also spins up Zipkin for traces, the tutorial does not yet
have a section outlining how to export traces from Go tools. A section
for traces will be added in a later CL.

Change-Id: I07f49977f7ab49995853ff8ee8eb6ccdf6ef1642
Reviewed-on: https://go-review.googlesource.com/c/tools/+/224258
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
2020-03-21 01:49:04 +00:00
| File | Last commit | Date |
| --- | --- | --- |
| wire | all: fix some staticcheck errors | 2020-01-29 04:53:41 +00:00 |
| metrics_test.go | internal/telemetry: add type safe tag keys | 2020-03-18 13:29:43 +00:00 |
| metrics.go | internal/telemetry: add type safe tag keys | 2020-03-18 13:29:43 +00:00 |
| ocagent_test.go | internal/telemetry: add type safe tag keys | 2020-03-18 13:29:43 +00:00 |
| ocagent.go | internal/telemetry: add type safe tag keys | 2020-03-18 13:29:43 +00:00 |
| README.md | internal/telemetry/export/ocagent: update metric tutorial to use oragent | 2020-03-21 01:49:04 +00:00 |
| trace_test.go | internal/telemetry: change ocagent test to use the standard telemetry methods | 2020-03-18 13:22:16 +00:00 |

Exporting Metrics with OpenCensus and Prometheus

This tutorial provides a minimal example to verify that metrics can be exported to OpenCensus from Go tools.

Setting up oragent

  1. Ensure you have docker and docker-compose.
  2. Clone oragent.
  3. In the oragent directory, start the services:
docker-compose up

If everything goes well, you should see output resembling the following:

Starting oragent_zipkin_1 ... done
Starting oragent_oragent_1 ... done
Starting oragent_prometheus_1 ... done
...
  4. To shut down oragent, hit Ctrl+C in the terminal.
  5. You can also start oragent in detached mode by running docker-compose up -d. To stop oragent while detached, run docker-compose down.
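
With oragent running, you can optionally confirm that the agent endpoint is reachable before writing any exporter code. The short Go sketch below is only an illustration; it assumes oragent publishes the agent on 127.0.0.1:55678, the same address used in the exporter configuration later in this tutorial.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address assumed to match the ocagent exporter configuration used
	// later in this tutorial; adjust it if your setup maps a different port.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:55678", 2*time.Second)
	if err != nil {
		fmt.Println("agent not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("agent is listening on 127.0.0.1:55678")
}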

Exporting Metrics

  1. Clone the tools subrepository.
  2. Inside internal, create a file named main.go with the following contents:
package main

import (
	"context"
	"fmt"
	"math/rand"
	"net/http"
	"time"

	"golang.org/x/tools/internal/telemetry/export"
	"golang.org/x/tools/internal/telemetry/export/ocagent"
	"golang.org/x/tools/internal/telemetry/metric"
	"golang.org/x/tools/internal/telemetry/stats"
)

func main() {

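	// Connect to the locally running OpenCensus agent and register the
	// exporter; Rate controls how often recorded data is sent to the agent.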
	exporter := ocagent.Connect(&ocagent.Config{
		Start:   time.Now(),
		Address: "http://127.0.0.1:55678",
		Service: "go-tools-test",
		Rate:    5 * time.Second,
		Client:  &http.Client{},
	})
	export.SetExporter(exporter)

	ctx := context.TODO()
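	// Define a float64 latency measure and a histogram that aggregates
	// recorded values into the buckets below.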
	mLatency := stats.Float64("latency", "the latency in milliseconds", "ms")
	distribution := metric.HistogramFloat64Data{
		Info: &metric.HistogramFloat64{
			Name:        "latencyDistribution",
			Description: "the various latencies",
			Buckets:     []float64{0, 10, 50, 100, 200, 400, 800, 1000, 1400, 2000, 5000, 10000, 15000},
		},
	}

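	// Subscribe the histogram to the latency measure so every recorded
	// value updates the distribution.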
	distribution.Info.Record(mLatency)

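	// Record a random latency forever; stop the program with Ctrl+C.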
	for {
		sleep := randomSleep()
		time.Sleep(time.Duration(sleep) * time.Millisecond)
		mLatency.Record(ctx, float64(sleep))

		fmt.Println("Latency: ", float64(sleep))
	}
}

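// randomSleep returns a pseudo-random latency in milliseconds, with an
// upper bound that varies based on the current time.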
func randomSleep() int64 {
	var max int64
	switch modulus := time.Now().Unix() % 5; modulus {
	case 0:
		max = 17001
	case 1:
		max = 8007
	case 2:
		max = 917
	case 3:
		max = 87
	case 4:
		max = 1173
	}
	return rand.Int63n(max)
}

  3. Run the new file from within the tools repository:
go run internal/main.go
  4. After about 5 seconds, OpenCensus should start receiving your new metrics, which you can see at http://localhost:8844/metrics. This page will look similar to the following:
# HELP promdemo_latencyDistribution the various latencies
# TYPE promdemo_latencyDistribution histogram
promdemo_latencyDistribution_bucket{vendor="otc",le="0"} 0
promdemo_latencyDistribution_bucket{vendor="otc",le="10"} 2
promdemo_latencyDistribution_bucket{vendor="otc",le="50"} 9
promdemo_latencyDistribution_bucket{vendor="otc",le="100"} 22
promdemo_latencyDistribution_bucket{vendor="otc",le="200"} 35
promdemo_latencyDistribution_bucket{vendor="otc",le="400"} 49
promdemo_latencyDistribution_bucket{vendor="otc",le="800"} 63
promdemo_latencyDistribution_bucket{vendor="otc",le="1000"} 78
promdemo_latencyDistribution_bucket{vendor="otc",le="1400"} 93
promdemo_latencyDistribution_bucket{vendor="otc",le="2000"} 108
promdemo_latencyDistribution_bucket{vendor="otc",le="5000"} 123
promdemo_latencyDistribution_bucket{vendor="otc",le="10000"} 138
promdemo_latencyDistribution_bucket{vendor="otc",le="15000"} 153
promdemo_latencyDistribution_bucket{vendor="otc",le="+Inf"} 15
promdemo_latencyDistribution_sum{vendor="otc"} 1641
promdemo_latencyDistribution_count{vendor="otc"} 15
  5. After a few more seconds, Prometheus should start displaying your new metrics. You can view the distribution at http://localhost:9445/graph?g0.range_input=5m&g0.stacked=1&g0.expr=rate(oragent_latencyDistribution_bucket%5B5m%5D)&g0.tab=0.
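
As an alternative to checking http://localhost:8844/metrics in a browser, you can scrape the endpoint from code. The sketch below is only an illustration; the port and the latencyDistribution metric name are taken from the sample output above, so adjust them if your setup differs.

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Fetch the Prometheus-format metrics page exposed by the agent.
	resp, err := http.Get("http://localhost:8844/metrics")
	if err != nil {
		fmt.Println("could not reach the metrics endpoint:", err)
		return
	}
	defer resp.Body.Close()

	// Print only the lines that belong to the latency histogram.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		if line := scanner.Text(); strings.Contains(line, "latencyDistribution") {
			fmt.Println(line)
		}
	}
}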