chore: vendor

2024-08-04 11:06:58 +02:00
parent 2a5985e44e
commit 04aec8232f
3557 changed files with 981078 additions and 1 deletions

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -0,0 +1,50 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"io"
"net/http"
"net/url"
"strings"
)
// DefaultClient is the default Client and is used by Get, Head, Post and PostForm.
// Please be careful of initialization order - for example, if you change
// the global propagator, the DefaultClient might still be using the old one.
var DefaultClient = &http.Client{Transport: NewTransport(http.DefaultTransport)}
// Get is a convenient replacement for http.Get that adds a span around the request.
func Get(ctx context.Context, targetURL string) (resp *http.Response, err error) {
req, err := http.NewRequestWithContext(ctx, "GET", targetURL, nil)
if err != nil {
return nil, err
}
return DefaultClient.Do(req)
}
// Head is a convenient replacement for http.Head that adds a span around the request.
func Head(ctx context.Context, targetURL string) (resp *http.Response, err error) {
req, err := http.NewRequestWithContext(ctx, "HEAD", targetURL, nil)
if err != nil {
return nil, err
}
return DefaultClient.Do(req)
}
// Post is a convenient replacement for http.Post that adds a span around the request.
func Post(ctx context.Context, targetURL, contentType string, body io.Reader) (resp *http.Response, err error) {
req, err := http.NewRequestWithContext(ctx, "POST", targetURL, body)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", contentType)
return DefaultClient.Do(req)
}
// PostForm is a convenient replacement for http.PostForm that adds a span around the request.
func PostForm(ctx context.Context, targetURL string, data url.Values) (resp *http.Response, err error) {
return Post(ctx, targetURL, "application/x-www-form-urlencoded", strings.NewReader(data.Encode()))
}
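The convenience wrappers above mirror net/http's package-level helpers but take a context, so a span already in that context becomes the parent of the client span. A minimal usage sketch, not part of the vendored code (the URL is a placeholder):

package main

import (
	"context"
	"fmt"
	"io"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	ctx := context.Background()
	// otelhttp.Get uses DefaultClient, whose Transport wraps
	// http.DefaultTransport with tracing via NewTransport.
	resp, err := otelhttp.Get(ctx, "https://example.com/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}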

@@ -0,0 +1,41 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"net/http"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
)
// Attribute keys that can be added to a span.
const (
ReadBytesKey = attribute.Key("http.read_bytes") // if anything was read from the request body, the total number of bytes read
ReadErrorKey = attribute.Key("http.read_error") // If an error occurred while reading a request, the string of the error (io.EOF is not recorded)
WroteBytesKey = attribute.Key("http.wrote_bytes") // if anything was written to the response writer, the total number of bytes written
WriteErrorKey = attribute.Key("http.write_error") // if an error occurred while writing a reply, the string of the error (io.EOF is not recorded)
)
// Server HTTP metrics.
const (
serverRequestSize = "http.server.request.size" // Incoming request bytes total
serverResponseSize = "http.server.response.size" // Incoming response bytes total
serverDuration = "http.server.duration" // Incoming end to end duration, milliseconds
)
// Client HTTP metrics.
const (
clientRequestSize = "http.client.request.size" // Outgoing request bytes total
clientResponseSize = "http.client.response.size" // Outgoing response bytes total
clientDuration = "http.client.duration" // Outgoing end to end duration, milliseconds
)
// Filter is a predicate used to determine whether a given http.Request should
// be traced. A Filter must return true if the request should be traced.
type Filter func(*http.Request) bool
func newTracer(tp trace.TracerProvider) trace.Tracer {
return tp.Tracer(ScopeName, trace.WithInstrumentationVersion(Version()))
}
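Filter is the extension point for opting requests out of tracing. A minimal sketch, not from the vendored code, of wiring a Filter in through WithFilter (the health-check path and port are assumptions):

package main

import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	// Return true to trace the request; liveness probes are skipped.
	skipProbes := otelhttp.Filter(func(r *http.Request) bool {
		return r.URL.Path != "/healthz"
	})
	handler := otelhttp.NewHandler(http.DefaultServeMux, "server",
		otelhttp.WithFilter(skipProbes),
	)
	_ = http.ListenAndServe(":8080", handler)
}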

@@ -0,0 +1,196 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"net/http"
"net/http/httptrace"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
)
// ScopeName is the instrumentation scope name.
const ScopeName = "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
// config represents the configuration options available for the http.Handler
// and http.Transport types.
type config struct {
ServerName string
Tracer trace.Tracer
Meter metric.Meter
Propagators propagation.TextMapPropagator
SpanStartOptions []trace.SpanStartOption
PublicEndpoint bool
PublicEndpointFn func(*http.Request) bool
ReadEvent bool
WriteEvent bool
Filters []Filter
SpanNameFormatter func(string, *http.Request) string
ClientTrace func(context.Context) *httptrace.ClientTrace
TracerProvider trace.TracerProvider
MeterProvider metric.MeterProvider
}
// Option interface used for setting optional config properties.
type Option interface {
apply(*config)
}
type optionFunc func(*config)
func (o optionFunc) apply(c *config) {
o(c)
}
// newConfig creates a new config struct and applies opts to it.
func newConfig(opts ...Option) *config {
c := &config{
Propagators: otel.GetTextMapPropagator(),
MeterProvider: otel.GetMeterProvider(),
}
for _, opt := range opts {
opt.apply(c)
}
// Tracer is only initialized if manually specified. Otherwise, can be passed with the tracing context.
if c.TracerProvider != nil {
c.Tracer = newTracer(c.TracerProvider)
}
c.Meter = c.MeterProvider.Meter(
ScopeName,
metric.WithInstrumentationVersion(Version()),
)
return c
}
// WithTracerProvider specifies a tracer provider to use for creating a tracer.
// If none is specified, the global provider is used.
func WithTracerProvider(provider trace.TracerProvider) Option {
return optionFunc(func(cfg *config) {
if provider != nil {
cfg.TracerProvider = provider
}
})
}
// WithMeterProvider specifies a meter provider to use for creating a meter.
// If none is specified, the global provider is used.
func WithMeterProvider(provider metric.MeterProvider) Option {
return optionFunc(func(cfg *config) {
if provider != nil {
cfg.MeterProvider = provider
}
})
}
// WithPublicEndpoint configures the Handler to link the span with an incoming
// span context. If this option is not provided, then the association is a child
// association instead of a link.
func WithPublicEndpoint() Option {
return optionFunc(func(c *config) {
c.PublicEndpoint = true
})
}
// WithPublicEndpointFn runs with every request, and allows conditionally
// configuring the Handler to link the span with an incoming span context. If
// this option is not provided or returns false, then the association is a
// child association instead of a link.
// Note: WithPublicEndpoint takes precedence over WithPublicEndpointFn.
func WithPublicEndpointFn(fn func(*http.Request) bool) Option {
return optionFunc(func(c *config) {
c.PublicEndpointFn = fn
})
}
// WithPropagators configures specific propagators. If this
// option isn't specified, then the global TextMapPropagator is used.
func WithPropagators(ps propagation.TextMapPropagator) Option {
return optionFunc(func(c *config) {
if ps != nil {
c.Propagators = ps
}
})
}
// WithSpanOptions configures an additional set of
// trace.SpanOptions, which are applied to each new span.
func WithSpanOptions(opts ...trace.SpanStartOption) Option {
return optionFunc(func(c *config) {
c.SpanStartOptions = append(c.SpanStartOptions, opts...)
})
}
// WithFilter adds a filter to the list of filters used by the handler.
// If any filter indicates to exclude a request then the request will not be
// traced. All filters must allow a request to be traced for a Span to be created.
// If no filters are provided then all requests are traced.
// Filters will be invoked for each processed request; it is advised to make them
// simple and fast.
func WithFilter(f Filter) Option {
return optionFunc(func(c *config) {
c.Filters = append(c.Filters, f)
})
}
type event int
// Different types of events that can be recorded, see WithMessageEvents.
const (
ReadEvents event = iota
WriteEvents
)
// WithMessageEvents configures the Handler to record the specified events
// (span.AddEvent) on spans. By default only summary attributes are added at the
// end of the request.
//
// Valid events are:
// - ReadEvents: Record the number of bytes read after every http.Request.Body.Read
// using the ReadBytesKey
// - WriteEvents: Record the number of bytes written after every http.ResponseWriter.Write
//     using the WroteBytesKey
func WithMessageEvents(events ...event) Option {
return optionFunc(func(c *config) {
for _, e := range events {
switch e {
case ReadEvents:
c.ReadEvent = true
case WriteEvents:
c.WriteEvent = true
}
}
})
}
// WithSpanNameFormatter takes a function that will be called on every
// request and the returned string will become the Span Name.
func WithSpanNameFormatter(f func(operation string, r *http.Request) string) Option {
return optionFunc(func(c *config) {
c.SpanNameFormatter = f
})
}
// WithClientTrace takes a function that returns client trace instance that will be
// applied to the requests sent through the otelhttp Transport.
func WithClientTrace(f func(context.Context) *httptrace.ClientTrace) Option {
return optionFunc(func(c *config) {
c.ClientTrace = f
})
}
// WithServerName returns an Option that sets the name of the (virtual) server
// handling requests.
func WithServerName(server string) Option {
return optionFunc(func(c *config) {
c.ServerName = server
})
}
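The options above all funnel into newConfig via optionFunc. A minimal sketch, not part of the vendored code, of composing several of them when constructing a handler (the operation name, formatter, and server name are assumptions):

package main

import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func newInstrumentedHandler(next http.Handler) http.Handler {
	return otelhttp.NewHandler(next, "http.server",
		// Span names become "<operation> <method>" instead of just the operation.
		otelhttp.WithSpanNameFormatter(func(operation string, r *http.Request) string {
			return operation + " " + r.Method
		}),
		// Record read/write span events in addition to the summary attributes.
		otelhttp.WithMessageEvents(otelhttp.ReadEvents, otelhttp.WriteEvents),
		otelhttp.WithServerName("app.internal"),
	)
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { _, _ = w.Write([]byte("ok")) })
	_ = http.ListenAndServe(":8080", newInstrumentedHandler(mux))
}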

@@ -0,0 +1,7 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package otelhttp provides an http.Handler and functions that are intended
// to be used to add tracing by wrapping existing handlers (with Handler) and
// routes (with WithRouteTag).
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"

@@ -0,0 +1,258 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"net/http"
"time"
"github.com/felixge/httpsnoop"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconv"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
)
// middleware is an http middleware which wraps the next handler in a span.
type middleware struct {
operation string
server string
tracer trace.Tracer
meter metric.Meter
propagators propagation.TextMapPropagator
spanStartOptions []trace.SpanStartOption
readEvent bool
writeEvent bool
filters []Filter
spanNameFormatter func(string, *http.Request) string
publicEndpoint bool
publicEndpointFn func(*http.Request) bool
traceSemconv semconv.HTTPServer
requestBytesCounter metric.Int64Counter
responseBytesCounter metric.Int64Counter
serverLatencyMeasure metric.Float64Histogram
}
func defaultHandlerFormatter(operation string, _ *http.Request) string {
return operation
}
// NewHandler wraps the passed handler in a span named after the operation and
// enriches it with metrics.
func NewHandler(handler http.Handler, operation string, opts ...Option) http.Handler {
return NewMiddleware(operation, opts...)(handler)
}
// NewMiddleware returns a tracing and metrics instrumentation middleware.
// The handler returned by the middleware wraps a handler
// in a span named after the operation and enriches it with metrics.
func NewMiddleware(operation string, opts ...Option) func(http.Handler) http.Handler {
h := middleware{
operation: operation,
traceSemconv: semconv.NewHTTPServer(),
}
defaultOpts := []Option{
WithSpanOptions(trace.WithSpanKind(trace.SpanKindServer)),
WithSpanNameFormatter(defaultHandlerFormatter),
}
c := newConfig(append(defaultOpts, opts...)...)
h.configure(c)
h.createMeasures()
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
h.serveHTTP(w, r, next)
})
}
}
func (h *middleware) configure(c *config) {
h.tracer = c.Tracer
h.meter = c.Meter
h.propagators = c.Propagators
h.spanStartOptions = c.SpanStartOptions
h.readEvent = c.ReadEvent
h.writeEvent = c.WriteEvent
h.filters = c.Filters
h.spanNameFormatter = c.SpanNameFormatter
h.publicEndpoint = c.PublicEndpoint
h.publicEndpointFn = c.PublicEndpointFn
h.server = c.ServerName
}
func handleErr(err error) {
if err != nil {
otel.Handle(err)
}
}
func (h *middleware) createMeasures() {
var err error
h.requestBytesCounter, err = h.meter.Int64Counter(
serverRequestSize,
metric.WithUnit("By"),
metric.WithDescription("Measures the size of HTTP request messages."),
)
handleErr(err)
h.responseBytesCounter, err = h.meter.Int64Counter(
serverResponseSize,
metric.WithUnit("By"),
metric.WithDescription("Measures the size of HTTP response messages."),
)
handleErr(err)
h.serverLatencyMeasure, err = h.meter.Float64Histogram(
serverDuration,
metric.WithUnit("ms"),
metric.WithDescription("Measures the duration of inbound HTTP requests."),
)
handleErr(err)
}
// serveHTTP sets up tracing and calls the given next http.Handler with the span
// context injected into the request context.
func (h *middleware) serveHTTP(w http.ResponseWriter, r *http.Request, next http.Handler) {
requestStartTime := time.Now()
for _, f := range h.filters {
if !f(r) {
// Simply pass through to the handler if a filter rejects the request
next.ServeHTTP(w, r)
return
}
}
ctx := h.propagators.Extract(r.Context(), propagation.HeaderCarrier(r.Header))
opts := []trace.SpanStartOption{
trace.WithAttributes(h.traceSemconv.RequestTraceAttrs(h.server, r)...),
}
opts = append(opts, h.spanStartOptions...)
if h.publicEndpoint || (h.publicEndpointFn != nil && h.publicEndpointFn(r.WithContext(ctx))) {
opts = append(opts, trace.WithNewRoot())
// Linking incoming span context if any for public endpoint.
if s := trace.SpanContextFromContext(ctx); s.IsValid() && s.IsRemote() {
opts = append(opts, trace.WithLinks(trace.Link{SpanContext: s}))
}
}
tracer := h.tracer
if tracer == nil {
if span := trace.SpanFromContext(r.Context()); span.SpanContext().IsValid() {
tracer = newTracer(span.TracerProvider())
} else {
tracer = newTracer(otel.GetTracerProvider())
}
}
ctx, span := tracer.Start(ctx, h.spanNameFormatter(h.operation, r), opts...)
defer span.End()
readRecordFunc := func(int64) {}
if h.readEvent {
readRecordFunc = func(n int64) {
span.AddEvent("read", trace.WithAttributes(ReadBytesKey.Int64(n)))
}
}
var bw bodyWrapper
// if request body is nil or NoBody, we don't want to mutate the body as it
// will affect the identity of it in an unforeseeable way because we assert
// ReadCloser fulfills a certain interface and it is indeed nil or NoBody.
if r.Body != nil && r.Body != http.NoBody {
bw.ReadCloser = r.Body
bw.record = readRecordFunc
r.Body = &bw
}
writeRecordFunc := func(int64) {}
if h.writeEvent {
writeRecordFunc = func(n int64) {
span.AddEvent("write", trace.WithAttributes(WroteBytesKey.Int64(n)))
}
}
rww := &respWriterWrapper{
ResponseWriter: w,
record: writeRecordFunc,
ctx: ctx,
props: h.propagators,
statusCode: http.StatusOK, // default status code in case the Handler doesn't write anything
}
// Wrap w to use our ResponseWriter methods while also exposing
// other interfaces that w may implement (http.CloseNotifier,
// http.Flusher, http.Hijacker, http.Pusher, io.ReaderFrom).
w = httpsnoop.Wrap(w, httpsnoop.Hooks{
Header: func(httpsnoop.HeaderFunc) httpsnoop.HeaderFunc {
return rww.Header
},
Write: func(httpsnoop.WriteFunc) httpsnoop.WriteFunc {
return rww.Write
},
WriteHeader: func(httpsnoop.WriteHeaderFunc) httpsnoop.WriteHeaderFunc {
return rww.WriteHeader
},
Flush: func(httpsnoop.FlushFunc) httpsnoop.FlushFunc {
return rww.Flush
},
})
labeler, found := LabelerFromContext(ctx)
if !found {
ctx = ContextWithLabeler(ctx, labeler)
}
next.ServeHTTP(w, r.WithContext(ctx))
span.SetStatus(semconv.ServerStatus(rww.statusCode))
span.SetAttributes(h.traceSemconv.ResponseTraceAttrs(semconv.ResponseTelemetry{
StatusCode: rww.statusCode,
ReadBytes: bw.read.Load(),
ReadError: bw.err,
WriteBytes: rww.written,
WriteError: rww.err,
})...)
// Add metrics
attributes := append(labeler.Get(), semconvutil.HTTPServerRequestMetrics(h.server, r)...)
if rww.statusCode > 0 {
attributes = append(attributes, semconv.HTTPStatusCode(rww.statusCode))
}
o := metric.WithAttributeSet(attribute.NewSet(attributes...))
addOpts := []metric.AddOption{o} // Allocate vararg slice once.
h.requestBytesCounter.Add(ctx, bw.read.Load(), addOpts...)
h.responseBytesCounter.Add(ctx, rww.written, addOpts...)
// Use floating point division here for higher precision (instead of Millisecond method).
elapsedTime := float64(time.Since(requestStartTime)) / float64(time.Millisecond)
h.serverLatencyMeasure.Record(ctx, elapsedTime, o)
}
// WithRouteTag annotates spans and metrics with the provided route name
// using the HTTP route attribute.
func WithRouteTag(route string, h http.Handler) http.Handler {
attr := semconv.NewHTTPServer().Route(route)
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
span := trace.SpanFromContext(r.Context())
span.SetAttributes(attr)
labeler, _ := LabelerFromContext(r.Context())
labeler.Add(attr)
h.ServeHTTP(w, r)
})
}
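A minimal sketch, not part of the vendored code, of NewHandler wrapping a mux, with WithRouteTag adding the http.route attribute to the span and the metric labeler for one route (the route pattern and port are assumptions):

package main

import (
	"io"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	mux := http.NewServeMux()
	mux.Handle("/users/", otelhttp.WithRouteTag("/users/{id}", http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			_, _ = io.WriteString(w, "hello")
		},
	)))
	// Every request gets one span named after the operation ("server" here)
	// plus request/response size and duration metrics.
	_ = http.ListenAndServe(":8080", otelhttp.NewHandler(mux, "server"))
}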

@@ -0,0 +1,82 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconv // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconv"
import (
"fmt"
"net/http"
"os"
"strings"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
)
type ResponseTelemetry struct {
StatusCode int
ReadBytes int64
ReadError error
WriteBytes int64
WriteError error
}
type HTTPServer struct {
duplicate bool
}
// RequestTraceAttrs returns trace attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
func (s HTTPServer) RequestTraceAttrs(server string, req *http.Request) []attribute.KeyValue {
if s.duplicate {
return append(oldHTTPServer{}.RequestTraceAttrs(server, req), newHTTPServer{}.RequestTraceAttrs(server, req)...)
}
return oldHTTPServer{}.RequestTraceAttrs(server, req)
}
// ResponseTraceAttrs returns trace attributes for telemetry from an HTTP response.
//
// If any of the fields in the ResponseTelemetry are not set the attribute will be omitted.
func (s HTTPServer) ResponseTraceAttrs(resp ResponseTelemetry) []attribute.KeyValue {
if s.duplicate {
return append(oldHTTPServer{}.ResponseTraceAttrs(resp), newHTTPServer{}.ResponseTraceAttrs(resp)...)
}
return oldHTTPServer{}.ResponseTraceAttrs(resp)
}
// Route returns the attribute for the route.
func (s HTTPServer) Route(route string) attribute.KeyValue {
return oldHTTPServer{}.Route(route)
}
func NewHTTPServer() HTTPServer {
env := strings.ToLower(os.Getenv("OTEL_HTTP_CLIENT_COMPATIBILITY_MODE"))
return HTTPServer{duplicate: env == "http/dup"}
}
// ServerStatus returns a span status code and message for an HTTP status code
// value returned by a server. Status codes in the 400-499 range are not
// returned as errors.
func ServerStatus(code int) (codes.Code, string) {
if code < 100 || code >= 600 {
return codes.Error, fmt.Sprintf("Invalid HTTP status code %d", code)
}
if code >= 500 {
return codes.Error, ""
}
return codes.Unset, ""
}

@@ -0,0 +1,91 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconv // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconv"
import (
"net"
"net/http"
"strconv"
"strings"
"go.opentelemetry.io/otel/attribute"
semconvNew "go.opentelemetry.io/otel/semconv/v1.24.0"
)
// splitHostPort splits a network address hostport of the form "host",
// "host%zone", "[host]", "[host%zone], "host:port", "host%zone:port",
// "[host]:port", "[host%zone]:port", or ":port" into host or host%zone and
// port.
//
// An empty host is returned if it is not provided or unparsable. A negative
// port is returned if it is not provided or unparsable.
func splitHostPort(hostport string) (host string, port int) {
port = -1
if strings.HasPrefix(hostport, "[") {
addrEnd := strings.LastIndex(hostport, "]")
if addrEnd < 0 {
// Invalid hostport.
return
}
if i := strings.LastIndex(hostport[addrEnd:], ":"); i < 0 {
host = hostport[1:addrEnd]
return
}
} else {
if i := strings.LastIndex(hostport, ":"); i < 0 {
host = hostport
return
}
}
host, pStr, err := net.SplitHostPort(hostport)
if err != nil {
return
}
p, err := strconv.ParseUint(pStr, 10, 16)
if err != nil {
return
}
return host, int(p)
}
func requiredHTTPPort(https bool, port int) int { // nolint:revive
if https {
if port > 0 && port != 443 {
return port
}
} else {
if port > 0 && port != 80 {
return port
}
}
return -1
}
func serverClientIP(xForwardedFor string) string {
if idx := strings.Index(xForwardedFor, ","); idx >= 0 {
xForwardedFor = xForwardedFor[:idx]
}
return xForwardedFor
}
func netProtocol(proto string) (name string, version string) {
name, version, _ = strings.Cut(proto, "/")
name = strings.ToLower(name)
return name, version
}
var methodLookup = map[string]attribute.KeyValue{
http.MethodConnect: semconvNew.HTTPRequestMethodConnect,
http.MethodDelete: semconvNew.HTTPRequestMethodDelete,
http.MethodGet: semconvNew.HTTPRequestMethodGet,
http.MethodHead: semconvNew.HTTPRequestMethodHead,
http.MethodOptions: semconvNew.HTTPRequestMethodOptions,
http.MethodPatch: semconvNew.HTTPRequestMethodPatch,
http.MethodPost: semconvNew.HTTPRequestMethodPost,
http.MethodPut: semconvNew.HTTPRequestMethodPut,
http.MethodTrace: semconvNew.HTTPRequestMethodTrace,
}
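A hypothetical in-package test sketch (the file and cases are assumptions, not part of the vendored code) exercising splitHostPort on the input forms its comment lists, including the empty-host and missing-port fallbacks:

package semconv

import "testing"

func TestSplitHostPortSketch(t *testing.T) {
	cases := []struct {
		in   string
		host string
		port int
	}{
		{"example.com:8080", "example.com", 8080}, // host:port
		{"example.com", "example.com", -1},        // no port -> -1
		{"[::1]:443", "::1", 443},                 // bracketed IPv6 with port
		{":80", "", 80},                           // port only -> empty host
	}
	for _, c := range cases {
		host, port := splitHostPort(c.in)
		if host != c.host || port != c.port {
			t.Errorf("splitHostPort(%q) = (%q, %d), want (%q, %d)", c.in, host, port, c.host, c.port)
		}
	}
}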

@@ -0,0 +1,74 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconv // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconv"
import (
"errors"
"io"
"net/http"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
"go.opentelemetry.io/otel/attribute"
semconv "go.opentelemetry.io/otel/semconv/v1.20.0"
)
type oldHTTPServer struct{}
// RequestTraceAttrs returns trace attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
func (o oldHTTPServer) RequestTraceAttrs(server string, req *http.Request) []attribute.KeyValue {
return semconvutil.HTTPServerRequest(server, req)
}
// ResponseTraceAttrs returns trace attributes for telemetry from an HTTP response.
//
// If any of the fields in the ResponseTelemetry are not set the attribute will be omitted.
func (o oldHTTPServer) ResponseTraceAttrs(resp ResponseTelemetry) []attribute.KeyValue {
attributes := []attribute.KeyValue{}
if resp.ReadBytes > 0 {
attributes = append(attributes, semconv.HTTPRequestContentLength(int(resp.ReadBytes)))
}
if resp.ReadError != nil && !errors.Is(resp.ReadError, io.EOF) {
// This is not in the semantic conventions, but is historically provided
attributes = append(attributes, attribute.String("http.read_error", resp.ReadError.Error()))
}
if resp.WriteBytes > 0 {
attributes = append(attributes, semconv.HTTPResponseContentLength(int(resp.WriteBytes)))
}
if resp.StatusCode > 0 {
attributes = append(attributes, semconv.HTTPStatusCode(resp.StatusCode))
}
if resp.WriteError != nil && !errors.Is(resp.WriteError, io.EOF) {
// This is not in the semantic conventions, but is historically provided
attributes = append(attributes, attribute.String("http.write_error", resp.WriteError.Error()))
}
return attributes
}
// Route returns the attribute for the route.
func (o oldHTTPServer) Route(route string) attribute.KeyValue {
return semconv.HTTPRoute(route)
}
// HTTPStatusCode returns the attribute for the HTTP status code.
// This is a temporary function needed by metrics. This will be removed when MetricsRequest is added.
func HTTPStatusCode(status int) attribute.KeyValue {
return semconv.HTTPStatusCode(status)
}

@@ -0,0 +1,197 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconv // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconv"
import (
"net/http"
"strings"
"go.opentelemetry.io/otel/attribute"
semconvNew "go.opentelemetry.io/otel/semconv/v1.24.0"
)
type newHTTPServer struct{}
// RequestTraceAttrs returns trace attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
func (n newHTTPServer) RequestTraceAttrs(server string, req *http.Request) []attribute.KeyValue {
count := 3 // ServerAddress, Method, Scheme
var host string
var p int
if server == "" {
host, p = splitHostPort(req.Host)
} else {
// Prioritize the primary server name.
host, p = splitHostPort(server)
if p < 0 {
_, p = splitHostPort(req.Host)
}
}
hostPort := requiredHTTPPort(req.TLS != nil, p)
if hostPort > 0 {
count++
}
method, methodOriginal := n.method(req.Method)
if methodOriginal != (attribute.KeyValue{}) {
count++
}
scheme := n.scheme(req.TLS != nil)
if peer, peerPort := splitHostPort(req.RemoteAddr); peer != "" {
// The Go HTTP server sets RemoteAddr to "IP:port", this will not be a
// file-path that would be interpreted with a sock family.
count++
if peerPort > 0 {
count++
}
}
useragent := req.UserAgent()
if useragent != "" {
count++
}
clientIP := serverClientIP(req.Header.Get("X-Forwarded-For"))
if clientIP != "" {
count++
}
if req.URL != nil && req.URL.Path != "" {
count++
}
protoName, protoVersion := netProtocol(req.Proto)
if protoName != "" && protoName != "http" {
count++
}
if protoVersion != "" {
count++
}
attrs := make([]attribute.KeyValue, 0, count)
attrs = append(attrs,
semconvNew.ServerAddress(host),
method,
scheme,
)
if hostPort > 0 {
attrs = append(attrs, semconvNew.ServerPort(hostPort))
}
if methodOriginal != (attribute.KeyValue{}) {
attrs = append(attrs, methodOriginal)
}
if peer, peerPort := splitHostPort(req.RemoteAddr); peer != "" {
// The Go HTTP server sets RemoteAddr to "IP:port", this will not be a
// file-path that would be interpreted with a sock family.
attrs = append(attrs, semconvNew.NetworkPeerAddress(peer))
if peerPort > 0 {
attrs = append(attrs, semconvNew.NetworkPeerPort(peerPort))
}
}
if useragent := req.UserAgent(); useragent != "" {
attrs = append(attrs, semconvNew.UserAgentOriginal(useragent))
}
if clientIP != "" {
attrs = append(attrs, semconvNew.ClientAddress(clientIP))
}
if req.URL != nil && req.URL.Path != "" {
attrs = append(attrs, semconvNew.URLPath(req.URL.Path))
}
if protoName != "" && protoName != "http" {
attrs = append(attrs, semconvNew.NetworkProtocolName(protoName))
}
if protoVersion != "" {
attrs = append(attrs, semconvNew.NetworkProtocolVersion(protoVersion))
}
return attrs
}
func (n newHTTPServer) method(method string) (attribute.KeyValue, attribute.KeyValue) {
if method == "" {
return semconvNew.HTTPRequestMethodGet, attribute.KeyValue{}
}
if attr, ok := methodLookup[method]; ok {
return attr, attribute.KeyValue{}
}
orig := semconvNew.HTTPRequestMethodOriginal(method)
if attr, ok := methodLookup[strings.ToUpper(method)]; ok {
return attr, orig
}
return semconvNew.HTTPRequestMethodGet, orig
}
func (n newHTTPServer) scheme(https bool) attribute.KeyValue { // nolint:revive
if https {
return semconvNew.URLScheme("https")
}
return semconvNew.URLScheme("http")
}
// ResponseTraceAttrs returns trace attributes for telemetry from an HTTP response.
//
// If any of the fields in the ResponseTelemetry are not set the attribute will be omitted.
func (n newHTTPServer) ResponseTraceAttrs(resp ResponseTelemetry) []attribute.KeyValue {
var count int
if resp.ReadBytes > 0 {
count++
}
if resp.WriteBytes > 0 {
count++
}
if resp.StatusCode > 0 {
count++
}
attributes := make([]attribute.KeyValue, 0, count)
if resp.ReadBytes > 0 {
attributes = append(attributes,
semconvNew.HTTPRequestBodySize(int(resp.ReadBytes)),
)
}
if resp.WriteBytes > 0 {
attributes = append(attributes,
semconvNew.HTTPResponseBodySize(int(resp.WriteBytes)),
)
}
if resp.StatusCode > 0 {
attributes = append(attributes,
semconvNew.HTTPResponseStatusCode(resp.StatusCode),
)
}
return attributes
}
// Route returns the attribute for the route.
func (n newHTTPServer) Route(route string) attribute.KeyValue {
return semconvNew.HTTPRoute(route)
}

@@ -0,0 +1,10 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconvutil // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
// Generate semconvutil package:
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/httpconv_test.go.tmpl "--data={}" --out=httpconv_test.go
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/httpconv.go.tmpl "--data={}" --out=httpconv.go
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/netconv_test.go.tmpl "--data={}" --out=netconv_test.go
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/netconv.go.tmpl "--data={}" --out=netconv.go

@@ -0,0 +1,575 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/semconvutil/httpconv.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconvutil // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
import (
"fmt"
"net/http"
"strings"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
semconv "go.opentelemetry.io/otel/semconv/v1.20.0"
)
// HTTPClientResponse returns trace attributes for an HTTP response received by a
// client from a server. It will return the following attributes if the related
// values are defined in resp: "http.status_code",
// "http.response_content_length".
//
// This does not add all OpenTelemetry required attributes for an HTTP event;
// it assumes ClientRequest was used to create the span with a complete set of
// attributes. A complete set of attributes can be generated using the
// request contained in resp. For example:
//
// append(HTTPClientResponse(resp), ClientRequest(resp.Request)...)
func HTTPClientResponse(resp *http.Response) []attribute.KeyValue {
return hc.ClientResponse(resp)
}
// HTTPClientRequest returns trace attributes for an HTTP request made by a client.
// The following attributes are always returned: "http.url", "http.method",
// "net.peer.name". The following attributes are returned if the related values
// are defined in req: "net.peer.port", "user_agent.original",
// "http.request_content_length".
func HTTPClientRequest(req *http.Request) []attribute.KeyValue {
return hc.ClientRequest(req)
}
// HTTPClientRequestMetrics returns metric attributes for an HTTP request made by a client.
// The following attributes are always returned: "http.method", "net.peer.name".
// The following attributes are returned if the
// related values are defined in req: "net.peer.port".
func HTTPClientRequestMetrics(req *http.Request) []attribute.KeyValue {
return hc.ClientRequestMetrics(req)
}
// HTTPClientStatus returns a span status code and message for an HTTP status code
// value received by a client.
func HTTPClientStatus(code int) (codes.Code, string) {
return hc.ClientStatus(code)
}
// HTTPServerRequest returns trace attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "http.target", "net.host.name". The following attributes are returned if
// their related values are defined in req: "net.host.port", "net.sock.peer.addr",
// "net.sock.peer.port", "user_agent.original", "http.client_ip".
func HTTPServerRequest(server string, req *http.Request) []attribute.KeyValue {
return hc.ServerRequest(server, req)
}
// HTTPServerRequestMetrics returns metric attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "net.host.name". The following attributes are returned if they related
// values are defined in req: "net.host.port".
func HTTPServerRequestMetrics(server string, req *http.Request) []attribute.KeyValue {
return hc.ServerRequestMetrics(server, req)
}
// HTTPServerStatus returns a span status code and message for an HTTP status code
// value returned by a server. Status codes in the 400-499 range are not
// returned as errors.
func HTTPServerStatus(code int) (codes.Code, string) {
return hc.ServerStatus(code)
}
// httpConv are the HTTP semantic convention attributes defined for a version
// of the OpenTelemetry specification.
type httpConv struct {
NetConv *netConv
HTTPClientIPKey attribute.Key
HTTPMethodKey attribute.Key
HTTPRequestContentLengthKey attribute.Key
HTTPResponseContentLengthKey attribute.Key
HTTPRouteKey attribute.Key
HTTPSchemeHTTP attribute.KeyValue
HTTPSchemeHTTPS attribute.KeyValue
HTTPStatusCodeKey attribute.Key
HTTPTargetKey attribute.Key
HTTPURLKey attribute.Key
UserAgentOriginalKey attribute.Key
}
var hc = &httpConv{
NetConv: nc,
HTTPClientIPKey: semconv.HTTPClientIPKey,
HTTPMethodKey: semconv.HTTPMethodKey,
HTTPRequestContentLengthKey: semconv.HTTPRequestContentLengthKey,
HTTPResponseContentLengthKey: semconv.HTTPResponseContentLengthKey,
HTTPRouteKey: semconv.HTTPRouteKey,
HTTPSchemeHTTP: semconv.HTTPSchemeHTTP,
HTTPSchemeHTTPS: semconv.HTTPSchemeHTTPS,
HTTPStatusCodeKey: semconv.HTTPStatusCodeKey,
HTTPTargetKey: semconv.HTTPTargetKey,
HTTPURLKey: semconv.HTTPURLKey,
UserAgentOriginalKey: semconv.UserAgentOriginalKey,
}
// ClientResponse returns attributes for an HTTP response received by a client
// from a server. The following attributes are returned if the related values
// are defined in resp: "http.status_code", "http.response_content_length".
//
// This does not add all OpenTelemetry required attributes for an HTTP event;
// it assumes ClientRequest was used to create the span with a complete set of
// attributes. A complete set of attributes can be generated using the
// request contained in resp. For example:
//
// append(ClientResponse(resp), ClientRequest(resp.Request)...)
func (c *httpConv) ClientResponse(resp *http.Response) []attribute.KeyValue {
/* The following semantic conventions are returned if present:
http.status_code int
http.response_content_length int
*/
var n int
if resp.StatusCode > 0 {
n++
}
if resp.ContentLength > 0 {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
if resp.StatusCode > 0 {
attrs = append(attrs, c.HTTPStatusCodeKey.Int(resp.StatusCode))
}
if resp.ContentLength > 0 {
attrs = append(attrs, c.HTTPResponseContentLengthKey.Int(int(resp.ContentLength)))
}
return attrs
}
// ClientRequest returns attributes for an HTTP request made by a client. The
// following attributes are always returned: "http.url", "http.method",
// "net.peer.name". The following attributes are returned if the related values
// are defined in req: "net.peer.port", "user_agent.original",
// "http.request_content_length", "user_agent.original".
func (c *httpConv) ClientRequest(req *http.Request) []attribute.KeyValue {
/* The following semantic conventions are returned if present:
http.method string
user_agent.original string
http.url string
net.peer.name string
net.peer.port int
http.request_content_length int
*/
/* The following semantic conventions are not returned:
http.status_code This requires the response. See ClientResponse.
http.response_content_length This requires the response. See ClientResponse.
net.sock.family This requires the socket used.
net.sock.peer.addr This requires the socket used.
net.sock.peer.name This requires the socket used.
net.sock.peer.port This requires the socket used.
http.resend_count This is something outside of a single request.
net.protocol.name The value in the Request is ignored, and the Go client will always use "http".
net.protocol.version The value in the Request is ignored, and the Go client will always use 1.1 or 2.0.
*/
n := 3 // URL, peer name, and method.
var h string
if req.URL != nil {
h = req.URL.Host
}
peer, p := firstHostPort(h, req.Header.Get("Host"))
port := requiredHTTPPort(req.URL != nil && req.URL.Scheme == "https", p)
if port > 0 {
n++
}
useragent := req.UserAgent()
if useragent != "" {
n++
}
if req.ContentLength > 0 {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.method(req.Method))
var u string
if req.URL != nil {
// Remove any username/password info that may be in the URL.
userinfo := req.URL.User
req.URL.User = nil
u = req.URL.String()
// Restore any username/password info that was removed.
req.URL.User = userinfo
}
attrs = append(attrs, c.HTTPURLKey.String(u))
attrs = append(attrs, c.NetConv.PeerName(peer))
if port > 0 {
attrs = append(attrs, c.NetConv.PeerPort(port))
}
if useragent != "" {
attrs = append(attrs, c.UserAgentOriginalKey.String(useragent))
}
if l := req.ContentLength; l > 0 {
attrs = append(attrs, c.HTTPRequestContentLengthKey.Int64(l))
}
return attrs
}
// ClientRequestMetrics returns metric attributes for an HTTP request made by a client. The
// following attributes are always returned: "http.method", "net.peer.name".
// The following attributes are returned if the related values
// are defined in req: "net.peer.port".
func (c *httpConv) ClientRequestMetrics(req *http.Request) []attribute.KeyValue {
/* The following semantic conventions are returned if present:
http.method string
net.peer.name string
net.peer.port int
*/
n := 2 // method, peer name.
var h string
if req.URL != nil {
h = req.URL.Host
}
peer, p := firstHostPort(h, req.Header.Get("Host"))
port := requiredHTTPPort(req.URL != nil && req.URL.Scheme == "https", p)
if port > 0 {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.method(req.Method), c.NetConv.PeerName(peer))
if port > 0 {
attrs = append(attrs, c.NetConv.PeerPort(port))
}
return attrs
}
// ServerRequest returns attributes for an HTTP request received by a server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "http.target", "net.host.name". The following attributes are returned if they
// related values are defined in req: "net.host.port", "net.sock.peer.addr",
// "net.sock.peer.port", "user_agent.original", "http.client_ip",
// "net.protocol.name", "net.protocol.version".
func (c *httpConv) ServerRequest(server string, req *http.Request) []attribute.KeyValue {
/* The following semantic conventions are returned if present:
http.method string
http.scheme string
net.host.name string
net.host.port int
net.sock.peer.addr string
net.sock.peer.port int
user_agent.original string
http.client_ip string
net.protocol.name string Note: not set if the value is "http".
net.protocol.version string
http.target string Note: doesn't include the query parameter.
*/
/* The following semantic conventions are not returned:
http.status_code This requires the response.
http.request_content_length This requires the len() of body, which can mutate it.
http.response_content_length This requires the response.
http.route This is not available.
net.sock.peer.name This would require a DNS lookup.
net.sock.host.addr The request doesn't have access to the underlying socket.
net.sock.host.port The request doesn't have access to the underlying socket.
*/
n := 4 // Method, scheme, proto, and host name.
var host string
var p int
if server == "" {
host, p = splitHostPort(req.Host)
} else {
// Prioritize the primary server name.
host, p = splitHostPort(server)
if p < 0 {
_, p = splitHostPort(req.Host)
}
}
hostPort := requiredHTTPPort(req.TLS != nil, p)
if hostPort > 0 {
n++
}
peer, peerPort := splitHostPort(req.RemoteAddr)
if peer != "" {
n++
if peerPort > 0 {
n++
}
}
useragent := req.UserAgent()
if useragent != "" {
n++
}
clientIP := serverClientIP(req.Header.Get("X-Forwarded-For"))
if clientIP != "" {
n++
}
var target string
if req.URL != nil {
target = req.URL.Path
if target != "" {
n++
}
}
protoName, protoVersion := netProtocol(req.Proto)
if protoName != "" && protoName != "http" {
n++
}
if protoVersion != "" {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.method(req.Method))
attrs = append(attrs, c.scheme(req.TLS != nil))
attrs = append(attrs, c.NetConv.HostName(host))
if hostPort > 0 {
attrs = append(attrs, c.NetConv.HostPort(hostPort))
}
if peer != "" {
// The Go HTTP server sets RemoteAddr to "IP:port", this will not be a
// file-path that would be interpreted with a sock family.
attrs = append(attrs, c.NetConv.SockPeerAddr(peer))
if peerPort > 0 {
attrs = append(attrs, c.NetConv.SockPeerPort(peerPort))
}
}
if useragent != "" {
attrs = append(attrs, c.UserAgentOriginalKey.String(useragent))
}
if clientIP != "" {
attrs = append(attrs, c.HTTPClientIPKey.String(clientIP))
}
if target != "" {
attrs = append(attrs, c.HTTPTargetKey.String(target))
}
if protoName != "" && protoName != "http" {
attrs = append(attrs, c.NetConv.NetProtocolName.String(protoName))
}
if protoVersion != "" {
attrs = append(attrs, c.NetConv.NetProtocolVersion.String(protoVersion))
}
return attrs
}
// ServerRequestMetrics returns metric attributes for an HTTP request received
// by a server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "net.host.name". The following attributes are returned if they related
// values are defined in req: "net.host.port".
func (c *httpConv) ServerRequestMetrics(server string, req *http.Request) []attribute.KeyValue {
/* The following semantic conventions are returned if present:
http.scheme string
http.route string
http.method string
http.status_code int
net.host.name string
net.host.port int
net.protocol.name string Note: not set if the value is "http".
net.protocol.version string
*/
n := 3 // Method, scheme, and host name.
var host string
var p int
if server == "" {
host, p = splitHostPort(req.Host)
} else {
// Prioritize the primary server name.
host, p = splitHostPort(server)
if p < 0 {
_, p = splitHostPort(req.Host)
}
}
hostPort := requiredHTTPPort(req.TLS != nil, p)
if hostPort > 0 {
n++
}
protoName, protoVersion := netProtocol(req.Proto)
if protoName != "" {
n++
}
if protoVersion != "" {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.methodMetric(req.Method))
attrs = append(attrs, c.scheme(req.TLS != nil))
attrs = append(attrs, c.NetConv.HostName(host))
if hostPort > 0 {
attrs = append(attrs, c.NetConv.HostPort(hostPort))
}
if protoName != "" {
attrs = append(attrs, c.NetConv.NetProtocolName.String(protoName))
}
if protoVersion != "" {
attrs = append(attrs, c.NetConv.NetProtocolVersion.String(protoVersion))
}
return attrs
}
func (c *httpConv) method(method string) attribute.KeyValue {
if method == "" {
return c.HTTPMethodKey.String(http.MethodGet)
}
return c.HTTPMethodKey.String(method)
}
func (c *httpConv) methodMetric(method string) attribute.KeyValue {
method = strings.ToUpper(method)
switch method {
case http.MethodConnect, http.MethodDelete, http.MethodGet, http.MethodHead, http.MethodOptions, http.MethodPatch, http.MethodPost, http.MethodPut, http.MethodTrace:
default:
method = "_OTHER"
}
return c.HTTPMethodKey.String(method)
}
func (c *httpConv) scheme(https bool) attribute.KeyValue { // nolint:revive
if https {
return c.HTTPSchemeHTTPS
}
return c.HTTPSchemeHTTP
}
func serverClientIP(xForwardedFor string) string {
if idx := strings.Index(xForwardedFor, ","); idx >= 0 {
xForwardedFor = xForwardedFor[:idx]
}
return xForwardedFor
}
func requiredHTTPPort(https bool, port int) int { // nolint:revive
if https {
if port > 0 && port != 443 {
return port
}
} else {
if port > 0 && port != 80 {
return port
}
}
return -1
}
// Return the request host and port from the first non-empty source.
func firstHostPort(source ...string) (host string, port int) {
for _, hostport := range source {
host, port = splitHostPort(hostport)
if host != "" || port > 0 {
break
}
}
return
}
// ClientStatus returns a span status code and message for an HTTP status code
// value received by a client.
func (c *httpConv) ClientStatus(code int) (codes.Code, string) {
if code < 100 || code >= 600 {
return codes.Error, fmt.Sprintf("Invalid HTTP status code %d", code)
}
if code >= 400 {
return codes.Error, ""
}
return codes.Unset, ""
}
// ServerStatus returns a span status code and message for an HTTP status code
// value returned by a server. Status codes in the 400-499 range are not
// returned as errors.
func (c *httpConv) ServerStatus(code int) (codes.Code, string) {
if code < 100 || code >= 600 {
return codes.Error, fmt.Sprintf("Invalid HTTP status code %d", code)
}
if code >= 500 {
return codes.Error, ""
}
return codes.Unset, ""
}
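// The following is an illustrative sketch, not part of the vendored file: it
// shows how the status helpers above map HTTP status codes to span status
// codes. The helpers ignore the receiver's fields, so any *httpConv works.
func statusMappingSketch(c *httpConv) {
	clientCode, _ := c.ClientStatus(404)    // codes.Error: 4xx is an error for a client span.
	serverCode, _ := c.ServerStatus(404)    // codes.Unset: 4xx is not an error for a server span.
	failedCode, _ := c.ServerStatus(503)    // codes.Error: 5xx is a server error.
	invalidCode, msg := c.ServerStatus(700) // codes.Error, msg describes the invalid status code.
	_, _, _, _, _ = clientCode, serverCode, failedCode, invalidCode, msg
}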

View File

@ -0,0 +1,205 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/semconvutil/netconv.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconvutil // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
import (
"net"
"strconv"
"strings"
"go.opentelemetry.io/otel/attribute"
semconv "go.opentelemetry.io/otel/semconv/v1.20.0"
)
// NetTransport returns a trace attribute describing the transport protocol of the
// passed network. See the net.Dial for information about acceptable network
// values.
func NetTransport(network string) attribute.KeyValue {
return nc.Transport(network)
}
// netConv are the network semantic convention attributes defined for a version
// of the OpenTelemetry specification.
type netConv struct {
NetHostNameKey attribute.Key
NetHostPortKey attribute.Key
NetPeerNameKey attribute.Key
NetPeerPortKey attribute.Key
NetProtocolName attribute.Key
NetProtocolVersion attribute.Key
NetSockFamilyKey attribute.Key
NetSockPeerAddrKey attribute.Key
NetSockPeerPortKey attribute.Key
NetSockHostAddrKey attribute.Key
NetSockHostPortKey attribute.Key
NetTransportOther attribute.KeyValue
NetTransportTCP attribute.KeyValue
NetTransportUDP attribute.KeyValue
NetTransportInProc attribute.KeyValue
}
var nc = &netConv{
NetHostNameKey: semconv.NetHostNameKey,
NetHostPortKey: semconv.NetHostPortKey,
NetPeerNameKey: semconv.NetPeerNameKey,
NetPeerPortKey: semconv.NetPeerPortKey,
NetProtocolName: semconv.NetProtocolNameKey,
NetProtocolVersion: semconv.NetProtocolVersionKey,
NetSockFamilyKey: semconv.NetSockFamilyKey,
NetSockPeerAddrKey: semconv.NetSockPeerAddrKey,
NetSockPeerPortKey: semconv.NetSockPeerPortKey,
NetSockHostAddrKey: semconv.NetSockHostAddrKey,
NetSockHostPortKey: semconv.NetSockHostPortKey,
NetTransportOther: semconv.NetTransportOther,
NetTransportTCP: semconv.NetTransportTCP,
NetTransportUDP: semconv.NetTransportUDP,
NetTransportInProc: semconv.NetTransportInProc,
}
func (c *netConv) Transport(network string) attribute.KeyValue {
switch network {
case "tcp", "tcp4", "tcp6":
return c.NetTransportTCP
case "udp", "udp4", "udp6":
return c.NetTransportUDP
case "unix", "unixgram", "unixpacket":
return c.NetTransportInProc
default:
// "ip:*", "ip4:*", and "ip6:*" all are considered other.
return c.NetTransportOther
}
}
// Host returns attributes for a network host address.
func (c *netConv) Host(address string) []attribute.KeyValue {
h, p := splitHostPort(address)
var n int
if h != "" {
n++
if p > 0 {
n++
}
}
if n == 0 {
return nil
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.HostName(h))
if p > 0 {
attrs = append(attrs, c.HostPort(p))
}
return attrs
}
func (c *netConv) HostName(name string) attribute.KeyValue {
return c.NetHostNameKey.String(name)
}
func (c *netConv) HostPort(port int) attribute.KeyValue {
return c.NetHostPortKey.Int(port)
}
func family(network, address string) string {
switch network {
case "unix", "unixgram", "unixpacket":
return "unix"
default:
if ip := net.ParseIP(address); ip != nil {
if ip.To4() == nil {
return "inet6"
}
return "inet"
}
}
return ""
}
// Peer returns attributes for a network peer address.
func (c *netConv) Peer(address string) []attribute.KeyValue {
h, p := splitHostPort(address)
var n int
if h != "" {
n++
if p > 0 {
n++
}
}
if n == 0 {
return nil
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.PeerName(h))
if p > 0 {
attrs = append(attrs, c.PeerPort(p))
}
return attrs
}
func (c *netConv) PeerName(name string) attribute.KeyValue {
return c.NetPeerNameKey.String(name)
}
func (c *netConv) PeerPort(port int) attribute.KeyValue {
return c.NetPeerPortKey.Int(port)
}
func (c *netConv) SockPeerAddr(addr string) attribute.KeyValue {
return c.NetSockPeerAddrKey.String(addr)
}
func (c *netConv) SockPeerPort(port int) attribute.KeyValue {
return c.NetSockPeerPortKey.Int(port)
}
// splitHostPort splits a network address hostport of the form "host",
// "host%zone", "[host]", "[host%zone], "host:port", "host%zone:port",
// "[host]:port", "[host%zone]:port", or ":port" into host or host%zone and
// port.
//
// An empty host is returned if it is not provided or unparsable. A negative
// port is returned if it is not provided or unparsable.
func splitHostPort(hostport string) (host string, port int) {
port = -1
if strings.HasPrefix(hostport, "[") {
addrEnd := strings.LastIndex(hostport, "]")
if addrEnd < 0 {
// Invalid hostport.
return
}
if i := strings.LastIndex(hostport[addrEnd:], ":"); i < 0 {
host = hostport[1:addrEnd]
return
}
} else {
if i := strings.LastIndex(hostport, ":"); i < 0 {
host = hostport
return
}
}
host, pStr, err := net.SplitHostPort(hostport)
if err != nil {
return
}
p, err := strconv.ParseUint(pStr, 10, 16)
if err != nil {
return
}
return host, int(p)
}
func netProtocol(proto string) (name string, version string) {
name, version, _ = strings.Cut(proto, "/")
name = strings.ToLower(name)
return name, version
}
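// The following is an illustrative sketch, not part of the vendored file: it
// demonstrates the parsing behavior of splitHostPort and netProtocol above.
func parsingSketch() {
	host, port := splitHostPort("[::1]:8080") // host == "::1", port == 8080
	host, port = splitHostPort("example.com") // host == "example.com", port == -1 (no port given)
	name, version := netProtocol("HTTP/1.1")  // name == "http", version == "1.1"
	_, _, _, _ = host, port, name, version
}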

View File

@ -0,0 +1,58 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"sync"
"go.opentelemetry.io/otel/attribute"
)
// Labeler is used to allow instrumented HTTP handlers to add custom attributes to
// the metrics recorded by the net/http instrumentation.
type Labeler struct {
mu sync.Mutex
attributes []attribute.KeyValue
}
// Add attributes to a Labeler.
func (l *Labeler) Add(ls ...attribute.KeyValue) {
l.mu.Lock()
defer l.mu.Unlock()
l.attributes = append(l.attributes, ls...)
}
// Get returns a copy of the attributes added to the Labeler.
func (l *Labeler) Get() []attribute.KeyValue {
l.mu.Lock()
defer l.mu.Unlock()
ret := make([]attribute.KeyValue, len(l.attributes))
copy(ret, l.attributes)
return ret
}
type labelerContextKeyType int
const labelerContextKey labelerContextKeyType = 0
// ContextWithLabeler returns a new context with the provided Labeler instance.
// Attributes added to the specified labeler will be injected into metrics
// emitted by the instrumentation. Only one Labeler can be injected into the
// context. Injecting it multiple times overrides previous calls.
func ContextWithLabeler(parent context.Context, l *Labeler) context.Context {
return context.WithValue(parent, labelerContextKey, l)
}
// LabelerFromContext retrieves a Labeler instance from the provided context if
// one is available. If no Labeler was found in the provided context a new, empty
// Labeler is returned and the second return value is false. In this case it is
// safe to use the Labeler but any attributes added to it will not be used.
func LabelerFromContext(ctx context.Context) (*Labeler, bool) {
l, ok := ctx.Value(labelerContextKey).(*Labeler)
if !ok {
l = &Labeler{}
}
return l, ok
}
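// The following is an illustrative sketch, not part of the vendored file:
// instrumented handler code can attach extra metric attributes through the
// Labeler that the otelhttp handler stores in the request context.
func labelerUsageSketch(ctx context.Context) {
	labeler, _ := LabelerFromContext(ctx)
	labeler.Add(attribute.String("tenant", "acme")) // recorded on this request's metrics
}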

View File

@ -0,0 +1,277 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"io"
"net/http"
"net/http/httptrace"
"sync/atomic"
"time"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/propagation"
semconv "go.opentelemetry.io/otel/semconv/v1.20.0"
"go.opentelemetry.io/otel/trace"
)
// Transport implements the http.RoundTripper interface and wraps
// outbound HTTP(S) requests with a span and enriches it with metrics.
type Transport struct {
rt http.RoundTripper
tracer trace.Tracer
meter metric.Meter
propagators propagation.TextMapPropagator
spanStartOptions []trace.SpanStartOption
filters []Filter
spanNameFormatter func(string, *http.Request) string
clientTrace func(context.Context) *httptrace.ClientTrace
requestBytesCounter metric.Int64Counter
responseBytesCounter metric.Int64Counter
latencyMeasure metric.Float64Histogram
}
var _ http.RoundTripper = &Transport{}
// NewTransport wraps the provided http.RoundTripper with one that
// starts a span, injects the span context into the outbound request headers,
// and enriches it with metrics.
//
// If the provided http.RoundTripper is nil, http.DefaultTransport will be used
// as the base http.RoundTripper.
func NewTransport(base http.RoundTripper, opts ...Option) *Transport {
if base == nil {
base = http.DefaultTransport
}
t := Transport{
rt: base,
}
defaultOpts := []Option{
WithSpanOptions(trace.WithSpanKind(trace.SpanKindClient)),
WithSpanNameFormatter(defaultTransportFormatter),
}
c := newConfig(append(defaultOpts, opts...)...)
t.applyConfig(c)
t.createMeasures()
return &t
}
func (t *Transport) applyConfig(c *config) {
t.tracer = c.Tracer
t.meter = c.Meter
t.propagators = c.Propagators
t.spanStartOptions = c.SpanStartOptions
t.filters = c.Filters
t.spanNameFormatter = c.SpanNameFormatter
t.clientTrace = c.ClientTrace
}
func (t *Transport) createMeasures() {
var err error
t.requestBytesCounter, err = t.meter.Int64Counter(
clientRequestSize,
metric.WithUnit("By"),
metric.WithDescription("Measures the size of HTTP request messages."),
)
handleErr(err)
t.responseBytesCounter, err = t.meter.Int64Counter(
clientResponseSize,
metric.WithUnit("By"),
metric.WithDescription("Measures the size of HTTP response messages."),
)
handleErr(err)
t.latencyMeasure, err = t.meter.Float64Histogram(
clientDuration,
metric.WithUnit("ms"),
metric.WithDescription("Measures the duration of outbound HTTP requests."),
)
handleErr(err)
}
func defaultTransportFormatter(_ string, r *http.Request) string {
return "HTTP " + r.Method
}
// RoundTrip creates a Span and propagates its context via the provided request's headers
// before handing the request to the configured base RoundTripper. The created span will
// end when the response body is closed or when a read from the body returns io.EOF.
func (t *Transport) RoundTrip(r *http.Request) (*http.Response, error) {
requestStartTime := time.Now()
for _, f := range t.filters {
if !f(r) {
// Simply pass through to the base RoundTripper if a filter rejects the request
return t.rt.RoundTrip(r)
}
}
tracer := t.tracer
if tracer == nil {
if span := trace.SpanFromContext(r.Context()); span.SpanContext().IsValid() {
tracer = newTracer(span.TracerProvider())
} else {
tracer = newTracer(otel.GetTracerProvider())
}
}
opts := append([]trace.SpanStartOption{}, t.spanStartOptions...) // start with the configured options
ctx, span := tracer.Start(r.Context(), t.spanNameFormatter("", r), opts...)
if t.clientTrace != nil {
ctx = httptrace.WithClientTrace(ctx, t.clientTrace(ctx))
}
labeler, found := LabelerFromContext(ctx)
if !found {
ctx = ContextWithLabeler(ctx, labeler)
}
r = r.Clone(ctx) // According to RoundTripper spec, we shouldn't modify the origin request.
// use a body wrapper to determine the request size
var bw bodyWrapper
// If the request body is nil or NoBody, we don't want to mutate the body as
// that would affect its identity in an unforeseeable way, because we assert
// the ReadCloser fulfills a certain interface and it is indeed nil or NoBody.
if r.Body != nil && r.Body != http.NoBody {
bw.ReadCloser = r.Body
// No-op to prevent a nil panic; this record function is not used yet.
bw.record = func(int64) {}
r.Body = &bw
}
span.SetAttributes(semconvutil.HTTPClientRequest(r)...)
t.propagators.Inject(ctx, propagation.HeaderCarrier(r.Header))
res, err := t.rt.RoundTrip(r)
if err != nil {
span.RecordError(err)
span.SetStatus(codes.Error, err.Error())
span.End()
return res, err
}
// metrics
metricAttrs := append(labeler.Get(), semconvutil.HTTPClientRequestMetrics(r)...)
if res.StatusCode > 0 {
metricAttrs = append(metricAttrs, semconv.HTTPStatusCode(res.StatusCode))
}
o := metric.WithAttributeSet(attribute.NewSet(metricAttrs...))
addOpts := []metric.AddOption{o} // Allocate vararg slice once.
t.requestBytesCounter.Add(ctx, bw.read.Load(), addOpts...)
// For handling response bytes we leverage a callback when the client reads the http response
readRecordFunc := func(n int64) {
t.responseBytesCounter.Add(ctx, n, addOpts...)
}
// traces
span.SetAttributes(semconvutil.HTTPClientResponse(res)...)
span.SetStatus(semconvutil.HTTPClientStatus(res.StatusCode))
res.Body = newWrappedBody(span, readRecordFunc, res.Body)
// Use floating point division here for higher precision (instead of Millisecond method).
elapsedTime := float64(time.Since(requestStartTime)) / float64(time.Millisecond)
t.latencyMeasure.Record(ctx, elapsedTime, o)
return res, err
}
// newWrappedBody returns a new and appropriately scoped *wrappedBody as an
// io.ReadCloser. If the passed body implements io.Writer, the returned value
// will implement io.ReadWriteCloser.
func newWrappedBody(span trace.Span, record func(n int64), body io.ReadCloser) io.ReadCloser {
// The successful protocol switch responses will have a body that
// implements an io.ReadWriteCloser. Ensure this interface type continues
// to be satisfied if that is the case.
if _, ok := body.(io.ReadWriteCloser); ok {
return &wrappedBody{span: span, record: record, body: body}
}
// Remove the implementation of the io.ReadWriteCloser and only implement
// the io.ReadCloser.
return struct{ io.ReadCloser }{&wrappedBody{span: span, record: record, body: body}}
}
// wrappedBody is the response body type returned by the transport
// instrumentation to complete a span. Errors encountered when using the
// response body are recorded in the span tracking the response.
//
// The span tracking the response is ended when this body is closed.
//
// If the response body implements the io.Writer interface (i.e. for
// successful protocol switches), the wrapped body also will.
type wrappedBody struct {
span trace.Span
recorded atomic.Bool
record func(n int64)
body io.ReadCloser
read atomic.Int64
}
var _ io.ReadWriteCloser = &wrappedBody{}
func (wb *wrappedBody) Write(p []byte) (int, error) {
// This will not panic given the guard in newWrappedBody.
n, err := wb.body.(io.Writer).Write(p)
if err != nil {
wb.span.RecordError(err)
wb.span.SetStatus(codes.Error, err.Error())
}
return n, err
}
func (wb *wrappedBody) Read(b []byte) (int, error) {
n, err := wb.body.Read(b)
// Record the number of bytes read
wb.read.Add(int64(n))
switch err {
case nil:
// nothing to do here but fall through to the return
case io.EOF:
wb.recordBytesRead()
wb.span.End()
default:
wb.span.RecordError(err)
wb.span.SetStatus(codes.Error, err.Error())
}
return n, err
}
// recordBytesRead is a function that ensures the number of bytes read is recorded once and only once.
func (wb *wrappedBody) recordBytesRead() {
// note: it is more performant (and equally correct) to use atomic.Bool over sync.Once here. In the event that
// two goroutines are racing to call this method, the number of bytes read will no longer increase. Using
// CompareAndSwap allows later goroutines to return quickly and not block waiting for the race winner to finish
// calling wb.record(wb.read.Load()).
if wb.recorded.CompareAndSwap(false, true) {
// Record the total number of bytes read
wb.record(wb.read.Load())
}
}
func (wb *wrappedBody) Close() error {
wb.recordBytesRead()
wb.span.End()
if wb.body != nil {
return wb.body.Close()
}
return nil
}
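// The following is an illustrative sketch, not part of the vendored file: a
// client built this way has every outbound request traced and measured by the
// RoundTrip implementation above.
func instrumentedClientSketch() *http.Client {
	return &http.Client{Transport: NewTransport(http.DefaultTransport)}
}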

View File

@ -0,0 +1,17 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
// Version is the current release version of the otelhttp instrumentation.
func Version() string {
return "0.53.0"
// This string is updated by the pre_release.sh script during release
}
// SemVersion is the semantic version to be supplied to tracer/meter creation.
//
// Deprecated: Use [Version] instead.
func SemVersion() string {
return Version()
}

View File

@ -0,0 +1,99 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"io"
"net/http"
"sync/atomic"
"go.opentelemetry.io/otel/propagation"
)
var _ io.ReadCloser = &bodyWrapper{}
// bodyWrapper wraps a http.Request.Body (an io.ReadCloser) to track the number
// of bytes read and the last error.
type bodyWrapper struct {
io.ReadCloser
record func(n int64) // must not be nil
read atomic.Int64
err error
}
func (w *bodyWrapper) Read(b []byte) (int, error) {
n, err := w.ReadCloser.Read(b)
n1 := int64(n)
w.read.Add(n1)
w.err = err
w.record(n1)
return n, err
}
func (w *bodyWrapper) Close() error {
return w.ReadCloser.Close()
}
var _ http.ResponseWriter = &respWriterWrapper{}
// respWriterWrapper wraps a http.ResponseWriter in order to track the number of
// bytes written, the last error, and to catch the first written statusCode.
// TODO: The wrapped http.ResponseWriter doesn't implement any of the optional
// types (http.Hijacker, http.Pusher, http.CloseNotifier, http.Flusher, etc)
// that may be useful when using it in real life situations.
type respWriterWrapper struct {
http.ResponseWriter
record func(n int64) // must not be nil
// used to inject the header
ctx context.Context
props propagation.TextMapPropagator
written int64
statusCode int
err error
wroteHeader bool
}
func (w *respWriterWrapper) Header() http.Header {
return w.ResponseWriter.Header()
}
func (w *respWriterWrapper) Write(p []byte) (int, error) {
if !w.wroteHeader {
w.WriteHeader(http.StatusOK)
}
n, err := w.ResponseWriter.Write(p)
n1 := int64(n)
w.record(n1)
w.written += n1
w.err = err
return n, err
}
// WriteHeader persists initial statusCode for span attribution.
// All calls to WriteHeader are propagated to the underlying ResponseWriter,
// but only the statusCode from the first call is persisted.
// Blocking consecutive calls would alter the expected behavior and suppress
// the warning logs from net/http that help developers notice incorrect handler implementations.
func (w *respWriterWrapper) WriteHeader(statusCode int) {
if !w.wroteHeader {
w.wroteHeader = true
w.statusCode = statusCode
}
w.ResponseWriter.WriteHeader(statusCode)
}
func (w *respWriterWrapper) Flush() {
if !w.wroteHeader {
w.WriteHeader(http.StatusOK)
}
if f, ok := w.ResponseWriter.(http.Flusher); ok {
f.Flush()
}
}

9
vendor/go.opentelemetry.io/otel/.codespellignore generated vendored Normal file
View File

@ -0,0 +1,9 @@
ot
fo
te
collison
consequentially
ans
nam
valu
thirdparty

10
vendor/go.opentelemetry.io/otel/.codespellrc generated vendored Normal file
View File

@ -0,0 +1,10 @@
# https://github.com/codespell-project/codespell
[codespell]
builtin = clear,rare,informal
check-filenames =
check-hidden =
ignore-words = .codespellignore
interactive = 1
skip = .git,go.mod,go.sum,go.work,go.work.sum,semconv,venv,.tools
uri-ignore-words-list = *
write =

3
vendor/go.opentelemetry.io/otel/.gitattributes generated vendored Normal file
View File

@ -0,0 +1,3 @@
* text=auto eol=lf
*.{cmd,[cC][mM][dD]} text eol=crlf
*.{bat,[bB][aA][tT]} text eol=crlf

22
vendor/go.opentelemetry.io/otel/.gitignore generated vendored Normal file
View File

@ -0,0 +1,22 @@
.DS_Store
Thumbs.db
.tools/
venv/
.idea/
.vscode/
*.iml
*.so
coverage.*
go.work
go.work.sum
gen/
/example/dice/dice
/example/namedtracer/namedtracer
/example/otel-collector/otel-collector
/example/opencensus/opencensus
/example/passthrough/passthrough
/example/prometheus/prometheus
/example/zipkin/zipkin

302
vendor/go.opentelemetry.io/otel/.golangci.yml generated vendored Normal file
View File

@ -0,0 +1,302 @@
# See https://github.com/golangci/golangci-lint#config-file
run:
issues-exit-code: 1 #Default
tests: true #Default
linters:
  # Disable everything by default so upgrades do not include new "default
# enabled" linters.
disable-all: true
# Specifically enable linters we want to use.
enable:
- depguard
- errcheck
- errorlint
- godot
- gofumpt
- goimports
- gosec
- gosimple
- govet
- ineffassign
- misspell
- revive
- staticcheck
- tenv
- typecheck
- unconvert
- unused
- unparam
issues:
# Maximum issues count per one linter.
# Set to 0 to disable.
# Default: 50
# Setting to unlimited so the linter only is run once to debug all issues.
max-issues-per-linter: 0
# Maximum count of issues with the same text.
# Set to 0 to disable.
# Default: 3
# Setting to unlimited so the linter only is run once to debug all issues.
max-same-issues: 0
# Excluding configuration per-path, per-linter, per-text and per-source.
exclude-rules:
# TODO: Having appropriate comments for exported objects helps development,
# even for objects in internal packages. Appropriate comments for all
# exported objects should be added and this exclusion removed.
- path: '.*internal/.*'
text: "exported (method|function|type|const) (.+) should have comment or be unexported"
linters:
- revive
# Yes, they are, but it's okay in a test.
- path: _test\.go
text: "exported func.*returns unexported type.*which can be annoying to use"
linters:
- revive
# Example test functions should be treated like main.
- path: example.*_test\.go
text: "calls to (.+) only in main[(][)] or init[(][)] functions"
linters:
- revive
# It's okay to not run gosec in a test.
- path: _test\.go
linters:
- gosec
    # Ignoring gosec G404: Use of weak random number generator (math/rand instead of crypto/rand)
# as we commonly use it in tests and examples.
- text: "G404:"
linters:
- gosec
    # Ignoring gosec G402: TLS MinVersion too low
# as the https://pkg.go.dev/crypto/tls#Config handles MinVersion default well.
- text: "G402: TLS MinVersion too low."
linters:
- gosec
include:
# revive exported should have comment or be unexported.
- EXC0012
# revive package comment should be of the form ...
- EXC0013
linters-settings:
depguard:
rules:
non-tests:
files:
- "!$test"
- "!**/*test/*.go"
- "!**/internal/matchers/*.go"
deny:
- pkg: "testing"
- pkg: "github.com/stretchr/testify"
- pkg: "crypto/md5"
- pkg: "crypto/sha1"
- pkg: "crypto/**/pkix"
otlp-internal:
files:
- "!**/exporters/otlp/internal/**/*.go"
deny:
- pkg: "go.opentelemetry.io/otel/exporters/otlp/internal"
desc: Do not use cross-module internal packages.
otlptrace-internal:
files:
- "!**/exporters/otlp/otlptrace/*.go"
- "!**/exporters/otlp/otlptrace/internal/**.go"
deny:
- pkg: "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal"
desc: Do not use cross-module internal packages.
otlpmetric-internal:
files:
- "!**/exporters/otlp/otlpmetric/internal/*.go"
- "!**/exporters/otlp/otlpmetric/internal/**/*.go"
deny:
- pkg: "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/internal"
desc: Do not use cross-module internal packages.
otel-internal:
files:
- "**/sdk/*.go"
- "**/sdk/**/*.go"
- "**/exporters/*.go"
- "**/exporters/**/*.go"
- "**/schema/*.go"
- "**/schema/**/*.go"
- "**/metric/*.go"
- "**/metric/**/*.go"
- "**/bridge/*.go"
- "**/bridge/**/*.go"
- "**/example/*.go"
- "**/example/**/*.go"
- "**/trace/*.go"
- "**/trace/**/*.go"
- "**/log/*.go"
- "**/log/**/*.go"
deny:
- pkg: "go.opentelemetry.io/otel/internal$"
desc: Do not use cross-module internal packages.
- pkg: "go.opentelemetry.io/otel/internal/attribute"
desc: Do not use cross-module internal packages.
- pkg: "go.opentelemetry.io/otel/internal/internaltest"
desc: Do not use cross-module internal packages.
- pkg: "go.opentelemetry.io/otel/internal/matchers"
desc: Do not use cross-module internal packages.
godot:
exclude:
# Exclude links.
- '^ *\[[^]]+\]:'
# Exclude sentence fragments for lists.
- '^[ ]*[-•]'
# Exclude sentences prefixing a list.
- ':$'
goimports:
local-prefixes: go.opentelemetry.io
misspell:
locale: US
ignore-words:
- cancelled
revive:
# Sets the default failure confidence.
# This means that linting errors with less than 0.8 confidence will be ignored.
# Default: 0.8
confidence: 0.01
rules:
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#blank-imports
- name: blank-imports
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#bool-literal-in-expr
- name: bool-literal-in-expr
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#constant-logical-expr
- name: constant-logical-expr
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#context-as-argument
# TODO (#3372) re-enable linter when it is compatible. https://github.com/golangci/golangci-lint/issues/3280
- name: context-as-argument
disabled: true
arguments:
allowTypesBefore: "*testing.T"
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#context-keys-type
- name: context-keys-type
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#deep-exit
- name: deep-exit
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#defer
- name: defer
disabled: false
arguments:
- ["call-chain", "loop"]
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#dot-imports
- name: dot-imports
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#duplicated-imports
- name: duplicated-imports
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#early-return
- name: early-return
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#empty-block
- name: empty-block
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#empty-lines
- name: empty-lines
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#error-naming
- name: error-naming
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#error-return
- name: error-return
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#error-strings
- name: error-strings
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#errorf
- name: errorf
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#exported
- name: exported
disabled: false
arguments:
- "sayRepetitiveInsteadOfStutters"
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#flag-parameter
- name: flag-parameter
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#identical-branches
- name: identical-branches
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#if-return
- name: if-return
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#increment-decrement
- name: increment-decrement
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#indent-error-flow
- name: indent-error-flow
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#import-shadowing
- name: import-shadowing
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#package-comments
- name: package-comments
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#range
- name: range
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#range-val-in-closure
- name: range-val-in-closure
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#range-val-address
- name: range-val-address
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#redefines-builtin-id
- name: redefines-builtin-id
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#string-format
- name: string-format
disabled: false
arguments:
- - panic
- '/^[^\n]*$/'
- must not contain line breaks
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#struct-tag
- name: struct-tag
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#superfluous-else
- name: superfluous-else
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#time-equal
- name: time-equal
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#var-naming
- name: var-naming
disabled: false
arguments:
- ["ID"] # AllowList
- ["Otel", "Aws", "Gcp"] # DenyList
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#var-declaration
- name: var-declaration
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unconditional-recursion
- name: unconditional-recursion
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unexported-return
- name: unexported-return
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unhandled-error
- name: unhandled-error
disabled: false
arguments:
- "fmt.Fprint"
- "fmt.Fprintf"
- "fmt.Fprintln"
- "fmt.Print"
- "fmt.Printf"
- "fmt.Println"
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unnecessary-stmt
- name: unnecessary-stmt
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#useless-break
- name: useless-break
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#waitgroup-by-value
- name: waitgroup-by-value
disabled: false

6
vendor/go.opentelemetry.io/otel/.lycheeignore generated vendored Normal file
View File

@ -0,0 +1,6 @@
http://localhost
http://jaeger-collector
https://github.com/open-telemetry/opentelemetry-go/milestone/
https://github.com/open-telemetry/opentelemetry-go/projects
file:///home/runner/work/opentelemetry-go/opentelemetry-go/libraries
file:///home/runner/work/opentelemetry-go/opentelemetry-go/manual

29
vendor/go.opentelemetry.io/otel/.markdownlint.yaml generated vendored Normal file
View File

@ -0,0 +1,29 @@
# Default state for all rules
default: true
# ul-style
MD004: false
# hard-tabs
MD010: false
# line-length
MD013: false
# no-duplicate-header
MD024:
siblings_only: true
#single-title
MD025: false
# ol-prefix
MD029:
style: ordered
# no-inline-html
MD033: false
# fenced-code-language
MD040: false

3099
vendor/go.opentelemetry.io/otel/CHANGELOG.md generated vendored Normal file

File diff suppressed because it is too large Load Diff

17
vendor/go.opentelemetry.io/otel/CODEOWNERS generated vendored Normal file
View File

@ -0,0 +1,17 @@
#####################################################
#
# List of approvers for this repository
#
#####################################################
#
# Learn about membership in OpenTelemetry community:
# https://github.com/open-telemetry/community/blob/main/community-membership.md
#
#
# Learn about CODEOWNERS file format:
# https://help.github.com/en/articles/about-code-owners
#
* @MrAlias @XSAM @dashpole @MadVikingGod @pellared @hanyuancheung @dmathieu
CODEOWNERS @MrAlias @MadVikingGod @pellared @dashpole @XSAM @dmathieu

658
vendor/go.opentelemetry.io/otel/CONTRIBUTING.md generated vendored Normal file
View File

@ -0,0 +1,658 @@
# Contributing to opentelemetry-go
The Go special interest group (SIG) meets regularly. See the
OpenTelemetry
[community](https://github.com/open-telemetry/community#golang-sdk)
repo for information on this and other language SIGs.
See the [public meeting
notes](https://docs.google.com/document/d/1E5e7Ld0NuU1iVvf-42tOBpu2VBBLYnh73GJuITGJTTU/edit)
for a summary description of past meetings. To request edit access,
join the meeting or get in touch on
[Slack](https://cloud-native.slack.com/archives/C01NPAXACKT).
## Development
You can view and edit the source code by cloning this repository:
```sh
git clone https://github.com/open-telemetry/opentelemetry-go.git
```
To run the tests, use `make test` instead of `go test`.
There are some generated files checked into the repo. To make sure
that the generated files are up-to-date, run `make` (or `make
precommit` - the `precommit` target is the default).
The `precommit` target also fixes the formatting of the code and
checks the status of the go module files.
Additionally, there is a `codespell` target that checks for common
typos in the code. It is not run by default, but you can run it
manually with `make codespell`. It will set up a virtual environment
in `venv` and install `codespell` there.
If after running `make precommit` the output of `git status` contains
`nothing to commit, working tree clean` then it means that everything
is up-to-date and properly formatted.
## Pull Requests
### How to Send Pull Requests
Everyone is welcome to contribute code to `opentelemetry-go` via
GitHub pull requests (PRs).
To create a new PR, fork the project in GitHub and clone the upstream
repo:
```sh
go get -d go.opentelemetry.io/otel
```
(This may print some warning about "build constraints exclude all Go
files", just ignore it.)
This will put the project in `${GOPATH}/src/go.opentelemetry.io/otel`. You
can alternatively use `git` directly with:
```sh
git clone https://github.com/open-telemetry/opentelemetry-go
```
(Note that `git clone` is *not* using the `go.opentelemetry.io/otel` name -
that name is a kind of a redirector to GitHub that `go get` can
understand, but `git` does not.)
This would put the project in the `opentelemetry-go` directory in
the current working directory.
Enter the newly created directory and add your fork as a new remote:
```sh
git remote add <YOUR_FORK> git@github.com:<YOUR_GITHUB_USERNAME>/opentelemetry-go
```
Check out a new branch, make modifications, run linters and tests, update
`CHANGELOG.md`, and push the branch to your fork:
```sh
git checkout -b <YOUR_BRANCH_NAME>
# edit files
# update changelog
make precommit
git add -p
git commit
git push <YOUR_FORK> <YOUR_BRANCH_NAME>
```
Open a pull request against the main `opentelemetry-go` repo. Be sure to add the pull
request ID to the entry you added to `CHANGELOG.md`.
Avoid rebasing and force-pushing to your branch to facilitate reviewing the pull request.
Rewriting Git history makes it difficult to keep track of iterations during code review.
All pull requests are squashed to a single commit upon merge to `main`.
### How to Receive Comments
* If the PR is not ready for review, please put `[WIP]` in the title,
tag it as `work-in-progress`, or mark it as
[`draft`](https://github.blog/2019-02-14-introducing-draft-pull-requests/).
* Make sure CLA is signed and CI is clear.
### How to Get PRs Merged
A PR is considered **ready to merge** when:
* It has received two qualified approvals[^1].
This is not enforced through automation, but needs to be validated by the
maintainer merging.
* The qualified approvals need to be from [Approver]s/[Maintainer]s
affiliated with different companies. Two qualified approvals from
[Approver]s or [Maintainer]s affiliated with the same company counts as a
single qualified approval.
* PRs introducing changes that have already been discussed and consensus
reached only need one qualified approval. The discussion and resolution
needs to be linked to the PR.
* Trivial changes[^2] only need one qualified approval.
* All feedback has been addressed.
* All PR comments and suggestions are resolved.
* All GitHub Pull Request reviews with a status of "Request changes" have
been addressed. Another review by the objecting reviewer with a different
status can be submitted to clear the original review, or the review can be
dismissed by a [Maintainer] when the issues from the original review have
been addressed.
* Any comments or reviews that cannot be resolved between the PR author and
reviewers can be submitted to the community [Approver]s and [Maintainer]s
during the weekly SIG meeting. If consensus is reached among the
[Approver]s and [Maintainer]s during the SIG meeting the objections to the
PR may be dismissed or resolved or the PR closed by a [Maintainer].
* Any substantive changes to the PR require existing Approval reviews be
cleared unless the approver explicitly states that their approval persists
across changes. This includes changes resulting from other feedback.
[Approver]s and [Maintainer]s can help in clearing reviews and they should
be consulted if there are any questions.
* The PR branch is up to date with the base branch it is merging into.
* To ensure this does not block the PR, it should be configured to allow
maintainers to update it.
* It has been open for review for at least one working day. This gives people
reasonable time to review.
* Trivial changes[^2] do not have to wait for one day and may be merged with
a single [Maintainer]'s approval.
* All required GitHub workflows have succeeded.
* Urgent fix can take exception as long as it has been actively communicated
among [Maintainer]s.
Any [Maintainer] can merge the PR once the above criteria have been met.
[^1]: A qualified approval is a GitHub Pull Request review with "Approve"
status from an OpenTelemetry Go [Approver] or [Maintainer].
[^2]: Trivial changes include: typo corrections, cosmetic non-substantive
changes, documentation corrections or updates, dependency updates, etc.
## Design Choices
As with other OpenTelemetry clients, opentelemetry-go follows the
[OpenTelemetry Specification](https://opentelemetry.io/docs/specs/otel).
It's especially valuable to read through the [library
guidelines](https://opentelemetry.io/docs/specs/otel/library-guidelines).
### Focus on Capabilities, Not Structure Compliance
OpenTelemetry is an evolving specification, one where the desires and
use cases are clear, but the method to satisfy those use cases is
not.
As such, Contributions should provide functionality and behavior that
conforms to the specification, but the interface and structure is
flexible.
It is preferable to have contributions follow the idioms of the
language rather than conform to specific API names or argument
patterns in the spec.
For a deeper discussion, see
[this](https://github.com/open-telemetry/opentelemetry-specification/issues/165).
## Documentation
Each (non-internal, non-test) package must be documented using
[Go Doc Comments](https://go.dev/doc/comment),
preferably in a `doc.go` file.
Prefer using [Examples](https://pkg.go.dev/testing#hdr-Examples)
instead of putting code snippets in Go doc comments.
In some cases, you can even create [Testable Examples](https://go.dev/blog/examples).
You can install and run a "local Go Doc site" in the following way:
```sh
go install golang.org/x/pkgsite/cmd/pkgsite@latest
pkgsite
```
[`go.opentelemetry.io/otel/metric`](https://pkg.go.dev/go.opentelemetry.io/otel/metric)
is an example of a very well-documented package.
### README files
Each (non-internal, non-test, non-documentation) package must contain a
`README.md` file containing at least a title, and a `pkg.go.dev` badge.
The README should not be a repetition of Go doc comments.
You can verify the presence of all README files with the `make verify-readmes`
command.
## Style Guide
One of the primary goals of this project is that it is actually used by
developers. With this goal in mind the project strives to build
user-friendly and idiomatic Go code adhering to the Go community's best
practices.
For a non-comprehensive but foundational overview of these best practices
the [Effective Go](https://golang.org/doc/effective_go.html) documentation
is an excellent starting place.
As a convenience for developers building this project the `make precommit`
will format, lint, validate, and in some cases fix the changes you plan to
submit. This check will need to pass for your changes to be able to be
merged.
In addition to idiomatic Go, the project has adopted certain standards for
implementations of common patterns. These standards should be followed as a
default, and if they are not followed documentation needs to be included as
to the reasons why.
### Configuration
When creating an instantiation function for a complex `type T struct`, it is
useful to allow variable number of options to be applied. However, the strong
type system of Go restricts the function design options. There are a few ways
to solve this problem, but we have landed on the following design.
#### `config`
Configuration should be held in a `struct` named `config`, or prefixed with
specific type name this Configuration applies to if there are multiple
`config` in the package. This type must contain configuration options.
```go
// config contains configuration options for a thing.
type config struct {
// options ...
}
```
In general the `config` type will not need to be used externally to the
package and should be unexported. If, however, it is expected that the user
will likely want to build custom options for the configuration, the `config`
should be exported. Please, include in the documentation for the `config`
how the user can extend the configuration.
It is important that internal `config` are not shared across package boundaries.
This means a `config` from one package should not be directly used by another. The
one exception is the API packages. The configs from the base API, e.g.
`go.opentelemetry.io/otel/trace.TracerConfig` and
`go.opentelemetry.io/otel/metric.InstrumentConfig`, are intended to be consumed
by the SDK, therefore it is expected that these are exported.
When a config is exported we want to maintain forward and backward
compatibility, to achieve this no fields should be exported but should
instead be accessed by methods.
Optionally, it is common to include a `newConfig` function (with the same
naming scheme). This function wraps any defaults setting and looping over
all options to create a configured `config`.
```go
// newConfig returns an appropriately configured config.
func newConfig(options ...Option) config {
// Set default values for config.
config := config{/* […] */}
for _, option := range options {
config = option.apply(config)
}
// Perform any validation here.
return config
}
```
If validation of the `config` options is also performed this can return an
error as well that is expected to be handled by the instantiation function
or propagated to the user.
Given the design goal of not having the user need to work with the `config`,
the `newConfig` function should also be unexported.
#### `Option`
To set the value of the options a `config` contains, a corresponding
`Option` interface type should be used.
```go
type Option interface {
apply(config) config
}
```
Having `apply` unexported makes sure that it will not be used externally.
Moreover, the interface becomes sealed so the user cannot easily implement
the interface on its own.
The `apply` method should return a modified version of the passed config.
This approach, instead of passing a pointer, is used to prevent the config from being allocated on the heap.
The name of the interface should be prefixed in the same way the
corresponding `config` is (if at all).
#### Options
All user configurable options for a `config` must have a related unexported
implementation of the `Option` interface and an exported configuration
function that wraps this implementation.
The wrapping function name should be prefixed with `With*` (or in the
special case of a boolean options `Without*`) and should have the following
function signature.
```go
func With*(…) Option { … }
```
##### `bool` Options
```go
type defaultFalseOption bool
func (o defaultFalseOption) apply(c config) config {
c.Bool = bool(o)
return c
}
// WithOption sets a T to have an option included.
func WithOption() Option {
return defaultFalseOption(true)
}
```
```go
type defaultTrueOption bool
func (o defaultTrueOption) apply(c config) config {
c.Bool = bool(o)
return c
}
// WithoutOption sets a T to have Bool option excluded.
func WithoutOption() Option {
return defaultTrueOption(false)
}
```
##### Declared Type Options
```go
type myTypeOption struct {
MyType MyType
}
func (o myTypeOption) apply(c config) config {
c.MyType = o.MyType
return c
}
// WithMyType sets T to include MyType.
func WithMyType(t MyType) Option {
return myTypeOption{t}
}
```
##### Functional Options
```go
type optionFunc func(config) config
func (fn optionFunc) apply(c config) config {
return fn(c)
}
// WithMyType sets t as MyType.
func WithMyType(t MyType) Option {
return optionFunc(func(c config) config {
c.MyType = t
return c
})
}
```
#### Instantiation
Using this configuration pattern to configure instantiation with a `NewT`
function.
```go
func NewT(options ...Option) T {…}
```
Any required parameters can be declared before the variadic `options`.
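For example (an illustrative sketch), a required name could precede the options:
```go
func NewT(name string, options ...Option) T {…}
```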
#### Dealing with Overlap
Sometimes there are multiple complex `struct` types that share common
configuration and also have distinct configuration. To avoid repeated
portions of `config`s, a common `config` can be used with the union of
options being handled with the `Option` interface.
For example.
```go
// config holds options for all animals.
type config struct {
Weight float64
Color string
MaxAltitude float64
}
// DogOption applies Dog specific options.
type DogOption interface {
applyDog(config) config
}
// BirdOption applies Bird specific options.
type BirdOption interface {
applyBird(config) config
}
// Option applies options for all animals.
type Option interface {
BirdOption
DogOption
}
type weightOption float64
func (o weightOption) applyDog(c config) config {
c.Weight = float64(o)
return c
}
func (o weightOption) applyBird(c config) config {
c.Weight = float64(o)
return c
}
func WithWeight(w float64) Option { return weightOption(w) }
type furColorOption string
func (o furColorOption) applyDog(c config) config {
c.Color = string(o)
return c
}
func WithFurColor(c string) DogOption { return furColorOption(c) }
type maxAltitudeOption float64
func (o maxAltitudeOption) applyBird(c config) config {
c.MaxAltitude = float64(o)
return c
}
func WithMaxAltitude(a float64) BirdOption { return maxAltitudeOption(a) }
func NewDog(name string, o ...DogOption) Dog {…}
func NewBird(name string, o ...BirdOption) Bird {…}
```
### Interfaces
To allow other developers to better comprehend the code, it is important
to ensure it is sufficiently documented. One simple measure that contributes
to this aim is self-documenting by naming method parameters. Therefore,
where appropriate, methods of every exported interface type should have
their parameters appropriately named.
#### Interface Stability
All exported stable interfaces that include the following warning in their
documentation are allowed to be extended with additional methods.
> Warning: methods may be added to this interface in minor releases.
These interfaces are defined by the OpenTelemetry specification and will be
updated as the specification evolves.
Otherwise, stable interfaces MUST NOT be modified.
#### How to Change Specification Interfaces
When an API change must be made, we will update the SDK with the new method one
release before the API change. This will allow the SDK one version before the
API change to work seamlessly with the new API.
If an incompatible version of the SDK is used with the new API the application
will fail to compile.
#### How Not to Change Specification Interfaces
We have explored using a v2 of the API to change interfaces and found that there
was no way to introduce a v2 and have it work seamlessly with the v1 of the API.
Problems happened with libraries that upgraded to v2 when an application did not,
and would not produce any telemetry.
More detail of the approaches considered and their limitations can be found in
the [Use a V2 API to evolve interfaces](https://github.com/open-telemetry/opentelemetry-go/issues/3920)
issue.
#### How to Change Other Interfaces
If new functionality is needed for an interface that cannot be changed it MUST
be added by including an additional interface. That added interface can be a
simple interface for the specific functionality that you want to add or it can
be a super-set of the original interface. For example, if you wanted to a
`Close` method to the `Exporter` interface:
```go
type Exporter interface {
Export()
}
```
A new interface, `Closer`, can be added:
```go
type Closer interface {
Close()
}
```
Code that is passed the `Exporter` interface can now check to see if the passed
value also satisfies the new interface. E.g.
```go
func caller(e Exporter) {
/* ... */
if c, ok := e.(Closer); ok {
c.Close()
}
/* ... */
}
```
Alternatively, a new type that is the super-set of an `Exporter` can be created.
```go
type ClosingExporter struct {
Exporter
Close()
}
```
This new type can be used similarly to the simple interface above in that a
passed `Exporter` type can be asserted to satisfy the `ClosingExporter` type
and the `Close` method called.
This super-set approach can be useful if there is explicit behavior that needs
to be coupled with the original type and passed as a unified type to a new
function, but, because of this coupling, it also limits the applicability of
the added functionality. If there exist other interfaces where this
functionality should be added, each one will need its own super-set
interfaces and will duplicate the pattern. For this reason, the simple targeted
interface that defines the specific functionality should be preferred.
See also:
[Keeping Your Modules Compatible: Working with interfaces](https://go.dev/blog/module-compatibility#working-with-interfaces).
### Testing
The tests should never leak goroutines.
Use the term `ConcurrentSafe` in the test name when it aims to verify the
absence of race conditions.
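For example, a test that exercises a hypothetical `Recorder` type from multiple goroutines (an illustrative sketch; the type and constructor are placeholders) could be written as:
```go
func TestRecorderConcurrentSafe(t *testing.T) {
	r := NewRecorder()

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			r.Record(1) // run with -race to surface data races
		}()
	}
	wg.Wait() // waiting here ensures the test leaks no goroutines
}
```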
### Internal packages
The use of internal packages should be scoped to a single module. A sub-module
should never import from a parent internal package. This creates a coupling
between the two modules where a user can upgrade the parent without the child
and if the internal package API has changed it will fail to upgrade[^3].
There are two known exceptions to this rule:
- `go.opentelemetry.io/otel/internal/global`
- This package manages global state for all of opentelemetry-go. It needs to
be a single package in order to ensure the uniqueness of the global state.
- `go.opentelemetry.io/otel/internal/baggage`
- This package provides values in a `context.Context` that need to be
recognized by `go.opentelemetry.io/otel/baggage` and
`go.opentelemetry.io/otel/bridge/opentracing` but remain private.
If you have duplicate code in multiple modules, make that code into a Go
template stored in `go.opentelemetry.io/otel/internal/shared` and use [gotmpl]
to render the templates in the desired locations. See [#4404] for an example of
this.
[^3]: https://github.com/open-telemetry/opentelemetry-go/issues/3548
### Ignoring context cancellation
OpenTelemetry API implementations need to ignore the cancellation of the contexts that are
passed when recording a value (e.g. starting a span, recording a measurement, emitting a log).
Recording methods should not return an error describing the cancellation state of the context
when they complete, nor should they abort any work.
This rule may not apply if the OpenTelemetry specification defines a timeout mechanism for
the method. In that case the context cancellation can be used for the timeout with the
restriction that this behavior is documented for the method. Otherwise, timeouts
are expected to be handled by the user calling the API, not the implementation.
Stoppage of the telemetry pipeline is handled by calling the appropriate `Shutdown` method
of a provider. It is assumed the context passed from a user is not used for this purpose.
Outside of the direct recording of telemetry from the API (e.g. exporting telemetry,
force flushing telemetry, shutting down a signal provider) the context cancellation
should be honored. This means all work done on behalf of the user provided context
should be canceled.
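A minimal sketch of this rule (illustrative only; the tracer and span types are placeholders):
```go
func (t *tracer) Start(ctx context.Context, name string, _ ...trace.SpanStartOption) (context.Context, trace.Span) {
	// Deliberately do not check ctx.Err(): a canceled context must not stop
	// the span from being recorded, and no cancellation error is returned.
	s := t.newRecordingSpan(name)
	return trace.ContextWithSpan(ctx, s), s
}
```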
## Approvers and Maintainers
### Approvers
- [Chester Cheung](https://github.com/hanyuancheung), Tencent
### Maintainers
- [Aaron Clawson](https://github.com/MadVikingGod), LightStep
- [Damien Mathieu](https://github.com/dmathieu), Elastic
- [David Ashpole](https://github.com/dashpole), Google
- [Robert Pająk](https://github.com/pellared), Splunk
- [Sam Xie](https://github.com/XSAM), Cisco/AppDynamics
- [Tyler Yahn](https://github.com/MrAlias), Splunk
### Emeritus
- [Liz Fong-Jones](https://github.com/lizthegrey), Honeycomb
- [Gustavo Silva Paiva](https://github.com/paivagustavo), LightStep
- [Josh MacDonald](https://github.com/jmacd), LightStep
- [Anthony Mirabella](https://github.com/Aneurysm9), AWS
- [Evan Torrie](https://github.com/evantorrie), Yahoo
### Become an Approver or a Maintainer
See the [community membership document in OpenTelemetry community
repo](https://github.com/open-telemetry/community/blob/main/community-membership.md).
[Approver]: #approvers
[Maintainer]: #maintainers
[gotmpl]: https://pkg.go.dev/go.opentelemetry.io/build-tools/gotmpl
[#4404]: https://github.com/open-telemetry/opentelemetry-go/pull/4404

201
vendor/go.opentelemetry.io/otel/LICENSE generated vendored Normal file
View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

301
vendor/go.opentelemetry.io/otel/Makefile generated vendored Normal file
View File

@ -0,0 +1,301 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
TOOLS_MOD_DIR := ./internal/tools
ALL_DOCS := $(shell find . -name '*.md' -type f | sort)
ALL_GO_MOD_DIRS := $(shell find . -type f -name 'go.mod' -exec dirname {} \; | sort)
OTEL_GO_MOD_DIRS := $(filter-out $(TOOLS_MOD_DIR), $(ALL_GO_MOD_DIRS))
ALL_COVERAGE_MOD_DIRS := $(shell find . -type f -name 'go.mod' -exec dirname {} \; | grep -E -v '^./example|^$(TOOLS_MOD_DIR)' | sort)
GO = go
TIMEOUT = 60
.DEFAULT_GOAL := precommit
.PHONY: precommit ci
precommit: generate license-check misspell go-mod-tidy golangci-lint-fix verify-readmes verify-mods test-default
ci: generate license-check lint vanity-import-check verify-readmes verify-mods build test-default check-clean-work-tree test-coverage
# Tools
TOOLS = $(CURDIR)/.tools
$(TOOLS):
@mkdir -p $@
$(TOOLS)/%: $(TOOLS_MOD_DIR)/go.mod | $(TOOLS)
cd $(TOOLS_MOD_DIR) && \
$(GO) build -o $@ $(PACKAGE)
MULTIMOD = $(TOOLS)/multimod
$(TOOLS)/multimod: PACKAGE=go.opentelemetry.io/build-tools/multimod
SEMCONVGEN = $(TOOLS)/semconvgen
$(TOOLS)/semconvgen: PACKAGE=go.opentelemetry.io/build-tools/semconvgen
CROSSLINK = $(TOOLS)/crosslink
$(TOOLS)/crosslink: PACKAGE=go.opentelemetry.io/build-tools/crosslink
SEMCONVKIT = $(TOOLS)/semconvkit
$(TOOLS)/semconvkit: PACKAGE=go.opentelemetry.io/otel/$(TOOLS_MOD_DIR)/semconvkit
GOLANGCI_LINT = $(TOOLS)/golangci-lint
$(TOOLS)/golangci-lint: PACKAGE=github.com/golangci/golangci-lint/cmd/golangci-lint
MISSPELL = $(TOOLS)/misspell
$(TOOLS)/misspell: PACKAGE=github.com/client9/misspell/cmd/misspell
GOCOVMERGE = $(TOOLS)/gocovmerge
$(TOOLS)/gocovmerge: PACKAGE=github.com/wadey/gocovmerge
STRINGER = $(TOOLS)/stringer
$(TOOLS)/stringer: PACKAGE=golang.org/x/tools/cmd/stringer
PORTO = $(TOOLS)/porto
$(TOOLS)/porto: PACKAGE=github.com/jcchavezs/porto/cmd/porto
GOJQ = $(TOOLS)/gojq
$(TOOLS)/gojq: PACKAGE=github.com/itchyny/gojq/cmd/gojq
GOTMPL = $(TOOLS)/gotmpl
$(GOTMPL): PACKAGE=go.opentelemetry.io/build-tools/gotmpl
GORELEASE = $(TOOLS)/gorelease
$(GORELEASE): PACKAGE=golang.org/x/exp/cmd/gorelease
GOVULNCHECK = $(TOOLS)/govulncheck
$(TOOLS)/govulncheck: PACKAGE=golang.org/x/vuln/cmd/govulncheck
.PHONY: tools
tools: $(CROSSLINK) $(GOLANGCI_LINT) $(MISSPELL) $(GOCOVMERGE) $(STRINGER) $(PORTO) $(GOJQ) $(SEMCONVGEN) $(MULTIMOD) $(SEMCONVKIT) $(GOTMPL) $(GORELEASE)
# Virtualized python tools via docker
# The directory where the virtual environment is created.
VENVDIR := venv
# The directory where the python tools are installed.
PYTOOLS := $(VENVDIR)/bin
# The pip executable in the virtual environment.
PIP := $(PYTOOLS)/pip
# The directory in the docker image where the current directory is mounted.
WORKDIR := /workdir
# The python image to use for the virtual environment.
PYTHONIMAGE := python:3.11.3-slim-bullseye
# Run the python image with the current directory mounted.
DOCKERPY := docker run --rm -v "$(CURDIR):$(WORKDIR)" -w $(WORKDIR) $(PYTHONIMAGE)
# Create a virtual environment for Python tools.
$(PYTOOLS):
# The `--upgrade` flag is needed to ensure that the virtual environment is
# created with the latest pip version.
@$(DOCKERPY) bash -c "python3 -m venv $(VENVDIR) && $(PIP) install --upgrade pip"
# Install python packages into the virtual environment.
$(PYTOOLS)/%: $(PYTOOLS)
@$(DOCKERPY) $(PIP) install -r requirements.txt
CODESPELL = $(PYTOOLS)/codespell
$(CODESPELL): PACKAGE=codespell
# Generate
.PHONY: generate
generate: go-generate vanity-import-fix
.PHONY: go-generate
go-generate: $(OTEL_GO_MOD_DIRS:%=go-generate/%)
go-generate/%: DIR=$*
go-generate/%: $(STRINGER) $(GOTMPL)
@echo "$(GO) generate $(DIR)/..." \
&& cd $(DIR) \
&& PATH="$(TOOLS):$${PATH}" $(GO) generate ./...
.PHONY: vanity-import-fix
vanity-import-fix: $(PORTO)
@$(PORTO) --include-internal -w .
# Generate go.work file for local development.
.PHONY: go-work
go-work: $(CROSSLINK)
$(CROSSLINK) work --root=$(shell pwd)
# Build
.PHONY: build
build: $(OTEL_GO_MOD_DIRS:%=build/%) $(OTEL_GO_MOD_DIRS:%=build-tests/%)
build/%: DIR=$*
build/%:
@echo "$(GO) build $(DIR)/..." \
&& cd $(DIR) \
&& $(GO) build ./...
build-tests/%: DIR=$*
build-tests/%:
@echo "$(GO) build tests $(DIR)/..." \
&& cd $(DIR) \
&& $(GO) list ./... \
| grep -v third_party \
| xargs $(GO) test -vet=off -run xxxxxMatchNothingxxxxx >/dev/null
# Tests
TEST_TARGETS := test-default test-bench test-short test-verbose test-race
.PHONY: $(TEST_TARGETS) test
test-default test-race: ARGS=-race
test-bench: ARGS=-run=xxxxxMatchNothingxxxxx -test.benchtime=1ms -bench=.
test-short: ARGS=-short
test-verbose: ARGS=-v -race
$(TEST_TARGETS): test
test: $(OTEL_GO_MOD_DIRS:%=test/%)
test/%: DIR=$*
test/%:
@echo "$(GO) test -timeout $(TIMEOUT)s $(ARGS) $(DIR)/..." \
&& cd $(DIR) \
&& $(GO) list ./... \
| grep -v third_party \
| xargs $(GO) test -timeout $(TIMEOUT)s $(ARGS)
COVERAGE_MODE = atomic
COVERAGE_PROFILE = coverage.out
.PHONY: test-coverage
test-coverage: $(GOCOVMERGE)
@set -e; \
printf "" > coverage.txt; \
for dir in $(ALL_COVERAGE_MOD_DIRS); do \
echo "$(GO) test -coverpkg=go.opentelemetry.io/otel/... -covermode=$(COVERAGE_MODE) -coverprofile="$(COVERAGE_PROFILE)" $${dir}/..."; \
(cd "$${dir}" && \
$(GO) list ./... \
| grep -v third_party \
| grep -v 'semconv/v.*' \
| xargs $(GO) test -coverpkg=./... -covermode=$(COVERAGE_MODE) -coverprofile="$(COVERAGE_PROFILE)" && \
$(GO) tool cover -html=coverage.out -o coverage.html); \
done; \
$(GOCOVMERGE) $$(find . -name coverage.out) > coverage.txt
# Adding a directory will include all benchmarks in that directory if a filter is not specified.
BENCHMARK_TARGETS := sdk/trace
.PHONY: benchmark
benchmark: $(BENCHMARK_TARGETS:%=benchmark/%)
BENCHMARK_FILTER = .
# You can override the filter for a particular directory by adding a rule here.
benchmark/sdk/trace: BENCHMARK_FILTER = SpanWithAttributes_8/AlwaysSample
benchmark/%:
@echo "$(GO) test -timeout $(TIMEOUT)s -run=xxxxxMatchNothingxxxxx -bench=$(BENCHMARK_FILTER) $*..." \
&& cd $* \
$(foreach filter, $(BENCHMARK_FILTER), && $(GO) test -timeout $(TIMEOUT)s -run=xxxxxMatchNothingxxxxx -bench=$(filter))
.PHONY: golangci-lint golangci-lint-fix
golangci-lint-fix: ARGS=--fix
golangci-lint-fix: golangci-lint
golangci-lint: $(OTEL_GO_MOD_DIRS:%=golangci-lint/%)
golangci-lint/%: DIR=$*
golangci-lint/%: $(GOLANGCI_LINT)
@echo 'golangci-lint $(if $(ARGS),$(ARGS) ,)$(DIR)' \
&& cd $(DIR) \
&& $(GOLANGCI_LINT) run --allow-serial-runners $(ARGS)
.PHONY: crosslink
crosslink: $(CROSSLINK)
@echo "Updating intra-repository dependencies in all go modules" \
&& $(CROSSLINK) --root=$(shell pwd) --prune
.PHONY: go-mod-tidy
go-mod-tidy: $(ALL_GO_MOD_DIRS:%=go-mod-tidy/%)
go-mod-tidy/%: DIR=$*
go-mod-tidy/%: crosslink
@echo "$(GO) mod tidy in $(DIR)" \
&& cd $(DIR) \
&& $(GO) mod tidy -compat=1.21
.PHONY: lint-modules
lint-modules: go-mod-tidy
.PHONY: lint
lint: misspell lint-modules golangci-lint govulncheck
.PHONY: vanity-import-check
vanity-import-check: $(PORTO)
@$(PORTO) --include-internal -l . || ( echo "(run: make vanity-import-fix)"; exit 1 )
.PHONY: misspell
misspell: $(MISSPELL)
@$(MISSPELL) -w $(ALL_DOCS)
.PHONY: govulncheck
govulncheck: $(OTEL_GO_MOD_DIRS:%=govulncheck/%)
govulncheck/%: DIR=$*
govulncheck/%: $(GOVULNCHECK)
@echo "govulncheck ./... in $(DIR)" \
&& cd $(DIR) \
&& $(GOVULNCHECK) ./...
.PHONY: codespell
codespell: $(CODESPELL)
@$(DOCKERPY) $(CODESPELL)
.PHONY: license-check
license-check:
@licRes=$$(for f in $$(find . -type f \( -iname '*.go' -o -iname '*.sh' \) ! -path '**/third_party/*' ! -path './.git/*' ) ; do \
awk '/Copyright The OpenTelemetry Authors|generated|GENERATED/ && NR<=4 { found=1; next } END { if (!found) print FILENAME }' $$f; \
done); \
if [ -n "$${licRes}" ]; then \
echo "license header checking failed:"; echo "$${licRes}"; \
exit 1; \
fi
.PHONY: check-clean-work-tree
check-clean-work-tree:
@if ! git diff --quiet; then \
echo; \
echo 'Working tree is not clean, did you forget to run "make precommit"?'; \
echo; \
git status; \
exit 1; \
fi
SEMCONVPKG ?= "semconv/"
.PHONY: semconv-generate
semconv-generate: $(SEMCONVGEN) $(SEMCONVKIT)
[ "$(TAG)" ] || ( echo "TAG unset: missing opentelemetry semantic-conventions tag"; exit 1 )
[ "$(OTEL_SEMCONV_REPO)" ] || ( echo "OTEL_SEMCONV_REPO unset: missing path to opentelemetry semantic-conventions repo"; exit 1 )
$(SEMCONVGEN) -i "$(OTEL_SEMCONV_REPO)/model/." --only=attribute_group -p conventionType=trace -f attribute_group.go -t "$(SEMCONVPKG)/template.j2" -s "$(TAG)"
$(SEMCONVGEN) -i "$(OTEL_SEMCONV_REPO)/model/." --only=metric -f metric.go -t "$(SEMCONVPKG)/metric_template.j2" -s "$(TAG)"
$(SEMCONVKIT) -output "$(SEMCONVPKG)/$(TAG)" -tag "$(TAG)"
.PHONY: gorelease
gorelease: $(OTEL_GO_MOD_DIRS:%=gorelease/%)
gorelease/%: DIR=$*
gorelease/%:| $(GORELEASE)
@echo "gorelease in $(DIR):" \
&& cd $(DIR) \
&& $(GORELEASE) \
|| echo ""
.PHONY: verify-mods
verify-mods: $(MULTIMOD)
$(MULTIMOD) verify
.PHONY: prerelease
prerelease: verify-mods
@[ "${MODSET}" ] || ( echo ">> env var MODSET is not set"; exit 1 )
$(MULTIMOD) prerelease -m ${MODSET}
COMMIT ?= "HEAD"
.PHONY: add-tags
add-tags: verify-mods
@[ "${MODSET}" ] || ( echo ">> env var MODSET is not set"; exit 1 )
$(MULTIMOD) tag -m ${MODSET} -c ${COMMIT}
.PHONY: lint-markdown
lint-markdown:
docker run -v "$(CURDIR):$(WORKDIR)" avtodev/markdown-lint:v1 -c $(WORKDIR)/.markdownlint.yaml $(WORKDIR)/**/*.md
.PHONY: verify-readmes
verify-readmes:
./verify_readmes.sh

109
vendor/go.opentelemetry.io/otel/README.md generated vendored Normal file
View File

@ -0,0 +1,109 @@
# OpenTelemetry-Go
[![CI](https://github.com/open-telemetry/opentelemetry-go/workflows/ci/badge.svg)](https://github.com/open-telemetry/opentelemetry-go/actions?query=workflow%3Aci+branch%3Amain)
[![codecov.io](https://codecov.io/gh/open-telemetry/opentelemetry-go/coverage.svg?branch=main)](https://app.codecov.io/gh/open-telemetry/opentelemetry-go?branch=main)
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel)](https://pkg.go.dev/go.opentelemetry.io/otel)
[![Go Report Card](https://goreportcard.com/badge/go.opentelemetry.io/otel)](https://goreportcard.com/report/go.opentelemetry.io/otel)
[![Slack](https://img.shields.io/badge/slack-@cncf/otel--go-brightgreen.svg?logo=slack)](https://cloud-native.slack.com/archives/C01NPAXACKT)
OpenTelemetry-Go is the [Go](https://golang.org/) implementation of [OpenTelemetry](https://opentelemetry.io/).
It provides a set of APIs to directly measure performance and behavior of your software and send this data to observability platforms.
## Project Status
| Signal | Status |
|---------|--------------------|
| Traces | Stable |
| Metrics | Stable |
| Logs | Beta[^1] |
Progress and status specific to this repository are tracked in our
[project boards](https://github.com/open-telemetry/opentelemetry-go/projects)
and
[milestones](https://github.com/open-telemetry/opentelemetry-go/milestones).
Project versioning information and stability guarantees can be found in the
[versioning documentation](VERSIONING.md).
[^1]: https://github.com/orgs/open-telemetry/projects/43
### Compatibility
OpenTelemetry-Go ensures compatibility with the current supported versions of
the [Go language](https://golang.org/doc/devel/release#policy):
> Each major Go release is supported until there are two newer major releases.
> For example, Go 1.5 was supported until the Go 1.7 release, and Go 1.6 was supported until the Go 1.8 release.
For versions of Go that are no longer supported upstream, opentelemetry-go will
stop ensuring compatibility with these versions in the following manner:
- A minor release of opentelemetry-go will be made to add support for the new
supported release of Go.
- The following minor release of opentelemetry-go will remove compatibility
testing for the oldest (now archived upstream) version of Go. This, and
future, releases of opentelemetry-go may include features only supported by
the currently supported versions of Go.
Currently, this project supports the following environments.
| OS | Go Version | Architecture |
|---------|------------|--------------|
| Ubuntu | 1.22 | amd64 |
| Ubuntu | 1.21 | amd64 |
| Ubuntu | 1.22 | 386 |
| Ubuntu | 1.21 | 386 |
| Linux | 1.22 | arm64 |
| Linux | 1.21 | arm64 |
| MacOS | 1.22 | amd64 |
| MacOS | 1.21 | amd64 |
| Windows | 1.22 | amd64 |
| Windows | 1.21 | amd64 |
| Windows | 1.22 | 386 |
| Windows | 1.21 | 386 |
While this project should work for other systems, no compatibility guarantees
are made for those systems currently.
## Getting Started
You can find a getting started guide on [opentelemetry.io](https://opentelemetry.io/docs/languages/go/getting-started/).
OpenTelemetry's goal is to provide a single set of APIs to capture distributed
traces and metrics from your application and send them to an observability
platform. This project allows you to do just that for applications written in
Go. There are two steps to this process: instrument your application, and
configure an exporter.
### Instrumentation
To start capturing distributed traces and metric events from your application
it first needs to be instrumented. The easiest way to do this is by using an
instrumentation library for your code. Be sure to check out [the officially
supported instrumentation
libraries](https://github.com/open-telemetry/opentelemetry-go-contrib/tree/main/instrumentation).
If you need to extend the telemetry an instrumentation library provides or want
to build your own instrumentation for your application directly you will need
to use the
[Go otel](https://pkg.go.dev/go.opentelemetry.io/otel)
package. The included [examples](./example/) are a good way to see some
practical uses of this process.
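As a brief illustration (a sketch only; the tracer name, span name, and attribute below are placeholders), manual instrumentation with the `otel` package looks roughly like this:
```go
package example

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

func handleRequest(ctx context.Context) {
	// Acquire a Tracer from the globally registered TracerProvider.
	tracer := otel.Tracer("example.com/myapp")

	// Start a span for this operation and end it when the work is done.
	ctx, span := tracer.Start(ctx, "handleRequest")
	defer span.End()

	span.SetAttributes(attribute.String("request.kind", "example"))

	// Pass ctx to downstream calls so child spans are parented correctly.
	_ = ctx
}
```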
### Export
Now that your application is instrumented to collect telemetry, it needs an
export pipeline to send that telemetry to an observability platform.
All officially supported exporters for the OpenTelemetry project are contained in the [exporters directory](./exporters).
| Exporter | Logs | Metrics | Traces |
|---------------------------------------|:----:|:-------:|:------:|
| [OTLP](./exporters/otlp/) | ✓ | ✓ | ✓ |
| [Prometheus](./exporters/prometheus/) | | ✓ | |
| [stdout](./exporters/stdout/) | ✓ | ✓ | ✓ |
| [Zipkin](./exporters/zipkin/) | | | ✓ |
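For example, a minimal pipeline using the stdout trace exporter (a sketch; other exporters from the table above are wired up in the same way) can be configured like this:
```go
package example

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func setupPipeline() func(context.Context) error {
	// Create an exporter that writes spans to stdout.
	exp, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		log.Fatal(err)
	}

	// Wire the exporter into an SDK TracerProvider and register it globally.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	otel.SetTracerProvider(tp)

	// Return Shutdown so the caller can flush remaining spans on exit.
	return tp.Shutdown
}
```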
## Contributing
See the [contributing documentation](CONTRIBUTING.md).

145
vendor/go.opentelemetry.io/otel/RELEASING.md generated vendored Normal file
View File

@ -0,0 +1,145 @@
# Release Process
## Semantic Convention Generation
New versions of the [OpenTelemetry Semantic Conventions] mean new versions of the `semconv` package need to be generated.
The `semconv-generate` make target is used for this.
1. Checkout a local copy of the [OpenTelemetry Semantic Conventions] to the desired release tag.
2. Pull the latest `otel/semconvgen` image: `docker pull otel/semconvgen:latest`
3. Run the `make semconv-generate ...` target from this repository.
For example,
```sh
export TAG="v1.21.0" # Change to the release version you are generating.
export OTEL_SEMCONV_REPO="/absolute/path/to/opentelemetry/semantic-conventions"
docker pull otel/semconvgen:latest
make semconv-generate # Uses the exported TAG and OTEL_SEMCONV_REPO.
```
This should create a new sub-package of [`semconv`](./semconv).
Ensure things look correct before submitting a pull request to include the addition.
## Breaking changes validation
You can run `make gorelease` that runs [gorelease](https://pkg.go.dev/golang.org/x/exp/cmd/gorelease) to ensure that there are no unwanted changes done in the public API.
You can check/report problems with `gorelease` [here](https://golang.org/issues/26420).
## Verify changes for contrib repository
If the changes in the main repository are going to affect the contrib repository, it is important to verify that the changes are compatible with the contrib repository.
Follow [the steps](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/RELEASING.md#verify-otel-changes) in the contrib repository to verify OTel changes.
## Pre-Release
First, decide which module sets will be released and update their versions
in `versions.yaml`. Commit this change to a new branch.
Update go.mod for submodules to depend on the new release, which will happen in the next step.
1. Run the `prerelease` make target. It creates a branch
`prerelease_<module set>_<new tag>` that will contain all release changes.
```
make prerelease MODSET=<module set>
```
2. Verify the changes.
```
git diff ...prerelease_<module set>_<new tag>
```
This should have changed the version for all modules to be `<new tag>`.
If these changes look correct, merge them into your pre-release branch:
```
git merge prerelease_<module set>_<new tag>
```
3. Update the [Changelog](./CHANGELOG.md).
- Make sure all relevant changes for this release are included and are in language that non-contributors to the project can understand.
To verify this, you can look directly at the commits since the `<last tag>`.
```
git --no-pager log --pretty=oneline "<last tag>..HEAD"
```
- Move all the `Unreleased` changes into a new section following the title scheme (`[<new tag>] - <date of release>`).
- Update all the appropriate links at the bottom.
4. Push the changes to upstream and create a Pull Request on GitHub.
Be sure to include the curated changes from the [Changelog](./CHANGELOG.md) in the description.
## Tag
Once the Pull Request with all the version changes has been approved and merged it is time to tag the merged commit.
***IMPORTANT***: It is critical you use the same tag that you used in the Pre-Release step!
Failure to do so will leave things in a broken state. As long as you do not
change `versions.yaml` between pre-release and this step, things should be fine.
***IMPORTANT***: [There is currently no way to remove an incorrectly tagged version of a Go module](https://github.com/golang/go/issues/34189).
It is critical you make sure the version you push upstream is correct.
[Failure to do so will lead to minor emergencies and tough to work around](https://github.com/open-telemetry/opentelemetry-go/issues/331).
1. For each module set that will be released, run the `add-tags` make target
using the `<commit-hash>` of the commit on the main branch for the merged Pull Request.
```
make add-tags MODSET=<module set> COMMIT=<commit hash>
```
It should only be necessary to provide an explicit `COMMIT` value if the
current `HEAD` of your working directory is not the correct commit.
2. Push tags to the upstream remote (not your fork: `github.com/open-telemetry/opentelemetry-go.git`).
Make sure you push all sub-modules as well.
```
git push upstream <new tag>
git push upstream <submodules-path/new tag>
...
```
## Release
Finally create a Release for the new `<new tag>` on GitHub.
The release body should include all the release notes from the Changelog for this release.
## Verify Examples
After releasing, verify that examples build outside of the repository.
```
./verify_examples.sh
```
The script copies examples into a different directory, removes any `replace` declarations in `go.mod`, and builds them.
This ensures they build with the published release, not the local copy.
## Post-Release
### Contrib Repository
Once verified be sure to [make a release for the `contrib` repository](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/RELEASING.md) that uses this release.
### Website Documentation
Update the [Go instrumentation documentation] in the OpenTelemetry website under [content/en/docs/languages/go].
Importantly, bump any package versions referenced to be the latest one you just released and ensure all code examples still compile and are accurate.
[OpenTelemetry Semantic Conventions]: https://github.com/open-telemetry/semantic-conventions
[Go instrumentation documentation]: https://opentelemetry.io/docs/languages/go/
[content/en/docs/languages/go]: https://github.com/open-telemetry/opentelemetry.io/tree/main/content/en/docs/languages/go
### Demo Repository
Bump the dependencies in the following Go services:
- [`accountingservice`](https://github.com/open-telemetry/opentelemetry-demo/tree/main/src/accountingservice)
- [`checkoutservice`](https://github.com/open-telemetry/opentelemetry-demo/tree/main/src/checkoutservice)
- [`productcatalogservice`](https://github.com/open-telemetry/opentelemetry-demo/tree/main/src/productcatalogservice)

224
vendor/go.opentelemetry.io/otel/VERSIONING.md generated vendored Normal file
View File

@ -0,0 +1,224 @@
# Versioning
This document describes the versioning policy for this repository. This policy
is designed so the following goals can be achieved.
**Users are provided a codebase of value that is stable and secure.**
## Policy
* Versioning of this project will be idiomatic of a Go project using [Go
modules](https://github.com/golang/go/wiki/Modules).
* [Semantic import
versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning)
will be used.
* Versions will comply with [semver
2.0](https://semver.org/spec/v2.0.0.html) with the following exceptions.
* New methods may be added to exported API interfaces. All exported
interfaces that fall within this exception will include the following
paragraph in their public documentation.
> Warning: methods may be added to this interface in minor releases.
* If a module is version `v2` or higher, the major version of the module
must be included as a `/vN` at the end of the module paths used in
`go.mod` files (e.g., `module go.opentelemetry.io/otel/v2`, `require
go.opentelemetry.io/otel/v2 v2.0.1`) and in the package import path
(e.g., `import "go.opentelemetry.io/otel/v2/trace"`). This includes the
paths used in `go get` commands (e.g., `go get
go.opentelemetry.io/otel/v2@v2.0.1`. Note there is both a `/v2` and a
`@v2.0.1` in that example. One way to think about it is that the module
name now includes the `/v2`, so include `/v2` whenever you are using the
module name).
* If a module is version `v0` or `v1`, do not include the major version in
either the module path or the import path.
* Modules will be used to encapsulate signals and components.
* Experimental modules still under active development will be versioned at
`v0` to imply the stability guarantee defined by
[semver](https://semver.org/spec/v2.0.0.html#spec-item-4).
> Major version zero (0.y.z) is for initial development. Anything MAY
> change at any time. The public API SHOULD NOT be considered stable.
* Mature modules for which we guarantee a stable public API will be versioned
with a major version greater than `v0`.
* The decision to make a module stable will be made on a case-by-case
basis by the maintainers of this project.
* Experimental modules will start their versioning at `v0.0.0` and will
increment their minor version when backwards incompatible changes are
released and increment their patch version when backwards compatible
changes are released.
* All stable modules that use the same major version number will use the
same entire version number.
* Stable modules may be released with an incremented minor or patch
version even though that module has not been changed, but rather so
that it will remain at the same version as other stable modules that
did undergo change.
* When an experimental module becomes stable a new stable module version
will be released and will include this now stable module. The new
stable module version will be an increment of the minor version number
and will be applied to all existing stable modules as well as the newly
stable module being released.
* Versioning of the associated [contrib
repository](https://github.com/open-telemetry/opentelemetry-go-contrib) of
this project will be idiomatic of a Go project using [Go
modules](https://github.com/golang/go/wiki/Modules).
* [Semantic import
versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning)
will be used.
* Versions will comply with [semver 2.0](https://semver.org/spec/v2.0.0.html).
* If a module is version `v2` or higher, the
major version of the module must be included as a `/vN` at the end of the
module paths used in `go.mod` files (e.g., `module
go.opentelemetry.io/contrib/instrumentation/host/v2`, `require
go.opentelemetry.io/contrib/instrumentation/host/v2 v2.0.1`) and in the
package import path (e.g., `import
"go.opentelemetry.io/contrib/instrumentation/host/v2"`). This includes
the paths used in `go get` commands (e.g., `go get
go.opentelemetry.io/contrib/instrumentation/host/v2@v2.0.1`. Note there
is both a `/v2` and a `@v2.0.1` in that example. One way to think about
it is that the module name now includes the `/v2`, so include `/v2`
whenever you are using the module name).
* If a module is version `v0` or `v1`, do not include the major version
in either the module path or the import path.
* In addition to public APIs, telemetry produced by stable instrumentation
will remain stable and backwards compatible. This is to avoid breaking
alerts and dashboards.
* Modules will be used to encapsulate instrumentation, detectors, exporters,
propagators, and any other independent sets of related components.
* Experimental modules still under active development will be versioned at
`v0` to imply the stability guarantee defined by
[semver](https://semver.org/spec/v2.0.0.html#spec-item-4).
> Major version zero (0.y.z) is for initial development. Anything MAY
> change at any time. The public API SHOULD NOT be considered stable.
* Mature modules for which we guarantee a stable public API and telemetry will
be versioned with a major version greater than `v0`.
* Experimental modules will start their versioning at `v0.0.0` and will
increment their minor version when backwards incompatible changes are
released and increment their patch version when backwards compatible
changes are released.
* Stable contrib modules cannot depend on experimental modules from this
project.
* All stable contrib modules of the same major version with this project
will use the same entire version as this project.
* Stable modules may be released with an incremented minor or patch
version even though that module's code has not been changed. Instead,
the only change that will have been included is an update of that
module's dependency on this project's stable APIs.
* When an experimental module in contrib becomes stable a new stable
module version will be released and will include this now stable
module. The new stable module version will be an increment of the minor
version number and will be applied to all existing stable contrib
modules, this project's modules, and the newly stable module being
released.
* Contrib modules will be kept up to date with this project's releases.
* Due to the dependency contrib modules will implicitly have on this
project's modules the release of stable contrib modules to match the
released version number will be staggered after this project's release.
There is no explicit time guarantee for how long after this project's
release the contrib release will be. Effort should be made to keep them
as close in time as possible.
* No additional stable release in this project can be made until the
contrib repository has a matching stable release.
* No release can be made in the contrib repository after this project's
stable release except for a stable release of the contrib repository.
* GitHub releases will be made for all releases.
* Go modules will be made available at Go package mirrors.
## Example Versioning Lifecycle
To better understand the implementation of the above policy the following
example is provided. This project is simplified to include only the following
modules and their versions:
* `otel`: `v0.14.0`
* `otel/trace`: `v0.14.0`
* `otel/metric`: `v0.14.0`
* `otel/baggage`: `v0.14.0`
* `otel/sdk/trace`: `v0.14.0`
* `otel/sdk/metric`: `v0.14.0`
These modules have been developed to a point where the `otel/trace`,
`otel/baggage`, and `otel/sdk/trace` modules have reached a point that they
should be considered for a stable release. The `otel/metric` and
`otel/sdk/metric` are still under active development and the `otel` module
depends on both `otel/trace` and `otel/metric`.
The `otel` package is refactored to remove its dependencies on `otel/metric` so
it can be released as stable as well. With that done the following release
candidates are made:
* `otel`: `v1.0.0-RC1`
* `otel/trace`: `v1.0.0-RC1`
* `otel/baggage`: `v1.0.0-RC1`
* `otel/sdk/trace`: `v1.0.0-RC1`
The `otel/metric` and `otel/sdk/metric` modules remain at `v0.14.0`.
A few minor issues are discovered in the `otel/trace` package. These issues are
resolved with some minor, but backwards incompatible, changes and are released
as a second release candidate:
* `otel`: `v1.0.0-RC2`
* `otel/trace`: `v1.0.0-RC2`
* `otel/baggage`: `v1.0.0-RC2`
* `otel/sdk/trace`: `v1.0.0-RC2`
Notice that all module version numbers are incremented to adhere to our
versioning policy.
After these release candidates have been evaluated to satisfaction, they are
released as version `v1.0.0`.
* `otel`: `v1.0.0`
* `otel/trace`: `v1.0.0`
* `otel/baggage`: `v1.0.0`
* `otel/sdk/trace`: `v1.0.0`
Since both the `go` utility and the Go module system support [the semantic
versioning definition of
precedence](https://semver.org/spec/v2.0.0.html#spec-item-11), this release
will correctly be interpreted as the successor to the previous release
candidates.
Active development of this project continues. The `otel/metric` module now has
backwards incompatible changes to its API that need to be released and the
`otel/baggage` module has a minor bug fix that needs to be released. The
following release is made:
* `otel`: `v1.0.1`
* `otel/trace`: `v1.0.1`
* `otel/metric`: `v0.15.0`
* `otel/baggage`: `v1.0.1`
* `otel/sdk/trace`: `v1.0.1`
* `otel/sdk/metric`: `v0.15.0`
Notice that, again, all stable module versions are incremented in unison and
the `otel/sdk/metric` package, which depends on the `otel/metric` package, also
bumped its version. This bump of the `otel/sdk/metric` package makes sense
given their coupling, though it is not explicitly required by our versioning
policy.
As we progress, the `otel/metric` and `otel/sdk/metric` packages have reached a
point where they should be evaluated for stability. The `otel` module is
reintegrated with the `otel/metric` package and the following release is made:
* `otel`: `v1.1.0-RC1`
* `otel/trace`: `v1.1.0-RC1`
* `otel/metric`: `v1.1.0-RC1`
* `otel/baggage`: `v1.1.0-RC1`
* `otel/sdk/trace`: `v1.1.0-RC1`
* `otel/sdk/metric`: `v1.1.0-RC1`
All the modules are evaluated and determined to be viable for a stable release. They
are then released as version `v1.1.0` (the minor version is incremented to
indicate the addition of a new signal).
* `otel`: `v1.1.0`
* `otel/trace`: `v1.1.0`
* `otel/metric`: `v1.1.0`
* `otel/baggage`: `v1.1.0`
* `otel/sdk/trace`: `v1.1.0`
* `otel/sdk/metric`: `v1.1.0`

3
vendor/go.opentelemetry.io/otel/attribute/README.md generated vendored Normal file
View File

@ -0,0 +1,3 @@
# Attribute
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/attribute)](https://pkg.go.dev/go.opentelemetry.io/otel/attribute)

5
vendor/go.opentelemetry.io/otel/attribute/doc.go generated vendored Normal file
View File

@ -0,0 +1,5 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package attribute provides key and value attributes.
package attribute // import "go.opentelemetry.io/otel/attribute"

135
vendor/go.opentelemetry.io/otel/attribute/encoder.go generated vendored Normal file
View File

@ -0,0 +1,135 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
import (
"bytes"
"sync"
"sync/atomic"
)
type (
// Encoder is a mechanism for serializing an attribute set into a specific
// string representation that supports caching, to avoid repeated
// serialization. An example could be an exporter encoding the attribute
// set into a wire representation.
Encoder interface {
// Encode returns the serialized encoding of the attribute set using
// its Iterator. This result may be cached by an attribute.Set.
Encode(iterator Iterator) string
// ID returns a value that is unique for each class of attribute
// encoder. Attribute encoders allocate these using `NewEncoderID`.
ID() EncoderID
}
// EncoderID is used to identify distinct Encoder
// implementations, for caching encoded results.
EncoderID struct {
value uint64
}
// defaultAttrEncoder uses a sync.Pool of buffers to reduce the number of
// allocations used in encoding attributes. This implementation encodes a
// comma-separated list of key=value, with '/'-escaping of '=', ',', and
// '\'.
defaultAttrEncoder struct {
// pool is a pool of attribute set builders. The buffers in this pool
// grow to a size that most attribute encodings will not allocate new
// memory.
pool sync.Pool // *bytes.Buffer
}
)
// escapeChar is used to ensure uniqueness of the attribute encoding where
// keys or values contain either '=' or ','. Since there is no parser needed
// for this encoding and its only requirement is to be unique, this choice is
// arbitrary. Users will see these in some exporters (e.g., stdout), so the
// backslash ('\') is used as a conventional choice.
const escapeChar = '\\'
var (
_ Encoder = &defaultAttrEncoder{}
// encoderIDCounter is for generating IDs for other attribute encoders.
encoderIDCounter uint64
defaultEncoderOnce sync.Once
defaultEncoderID = NewEncoderID()
defaultEncoderInstance *defaultAttrEncoder
)
// NewEncoderID returns a unique attribute encoder ID. It should be called
// once per each type of attribute encoder. Preferably in init() or in var
// definition.
func NewEncoderID() EncoderID {
return EncoderID{value: atomic.AddUint64(&encoderIDCounter, 1)}
}
// DefaultEncoder returns an attribute encoder that encodes attributes in such
// a way that each escaped attribute's key is followed by an equal sign and
// then by an escaped attribute's value. All key-value pairs are separated by
// a comma.
//
// Escaping is done by prepending a backslash before either a backslash, equal
// sign or a comma.
func DefaultEncoder() Encoder {
defaultEncoderOnce.Do(func() {
defaultEncoderInstance = &defaultAttrEncoder{
pool: sync.Pool{
New: func() interface{} {
return &bytes.Buffer{}
},
},
}
})
return defaultEncoderInstance
}
// Encode is a part of an implementation of the AttributeEncoder interface.
func (d *defaultAttrEncoder) Encode(iter Iterator) string {
buf := d.pool.Get().(*bytes.Buffer)
defer d.pool.Put(buf)
buf.Reset()
for iter.Next() {
i, keyValue := iter.IndexedAttribute()
if i > 0 {
_, _ = buf.WriteRune(',')
}
copyAndEscape(buf, string(keyValue.Key))
_, _ = buf.WriteRune('=')
if keyValue.Value.Type() == STRING {
copyAndEscape(buf, keyValue.Value.AsString())
} else {
_, _ = buf.WriteString(keyValue.Value.Emit())
}
}
return buf.String()
}
// ID is a part of an implementation of the AttributeEncoder interface.
func (*defaultAttrEncoder) ID() EncoderID {
return defaultEncoderID
}
// copyAndEscape escapes `=`, `,` and its own escape character (`\`),
// making the default encoding unique.
func copyAndEscape(buf *bytes.Buffer, val string) {
for _, ch := range val {
switch ch {
case '=', ',', escapeChar:
_, _ = buf.WriteRune(escapeChar)
}
_, _ = buf.WriteRune(ch)
}
}
// Valid returns true if this encoder ID was allocated by
// `NewEncoderID`. Invalid encoder IDs will not be cached.
func (id EncoderID) Valid() bool {
return id.value != 0
}
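// Example (sketch): encoding a set with the default encoder. Keys are sorted,
// '=', ',' and '\' are escaped with a leading '\', and pairs are comma
// separated:
//
//	s := NewSet(String("k,1", "v=1"), Bool("ok", true))
//	enc := s.Encoded(DefaultEncoder()) // `k\,1=v\=1,ok=true`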

49
vendor/go.opentelemetry.io/otel/attribute/filter.go generated vendored Normal file
View File

@ -0,0 +1,49 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
// Filter supports removing certain attributes from attribute sets. When
// the filter returns true, the attribute will be kept in the filtered
// attribute set. When the filter returns false, the attribute is excluded
// from the filtered attribute set, and the attribute instead appears in
// the removed list of excluded attributes.
type Filter func(KeyValue) bool
// NewAllowKeysFilter returns a Filter that only allows attributes with one of
// the provided keys.
//
// If keys is empty a deny-all filter is returned.
func NewAllowKeysFilter(keys ...Key) Filter {
if len(keys) <= 0 {
return func(kv KeyValue) bool { return false }
}
allowed := make(map[Key]struct{})
for _, k := range keys {
allowed[k] = struct{}{}
}
return func(kv KeyValue) bool {
_, ok := allowed[kv.Key]
return ok
}
}
// NewDenyKeysFilter returns a Filter that only allows attributes
// that do not have one of the provided keys.
//
// If keys is empty an allow-all filter is returned.
func NewDenyKeysFilter(keys ...Key) Filter {
if len(keys) <= 0 {
return func(kv KeyValue) bool { return true }
}
forbid := make(map[Key]struct{})
for _, k := range keys {
forbid[k] = struct{}{}
}
return func(kv KeyValue) bool {
_, ok := forbid[kv.Key]
return !ok
}
}
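// Example (sketch): allow-list filtering of individual attributes.
//
//	keep := NewAllowKeysFilter("http.method")
//	keep(String("http.method", "GET"))   // true:  the attribute is kept
//	keep(String("http.target", "/item")) // false: the attribute is removed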

150
vendor/go.opentelemetry.io/otel/attribute/iterator.go generated vendored Normal file
View File

@ -0,0 +1,150 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
// Iterator allows iterating over the set of attributes in order, sorted by
// key.
type Iterator struct {
storage *Set
idx int
}
// MergeIterator supports iterating over two sets of attributes while
// eliminating duplicate values from the combined set. The first iterator
// value takes precedence.
type MergeIterator struct {
one oneIterator
two oneIterator
current KeyValue
}
type oneIterator struct {
iter Iterator
done bool
attr KeyValue
}
// Next moves the iterator to the next position. Returns false if there are no
// more attributes.
func (i *Iterator) Next() bool {
i.idx++
return i.idx < i.Len()
}
// Label returns current KeyValue. Must be called only after Next returns
// true.
//
// Deprecated: Use Attribute instead.
func (i *Iterator) Label() KeyValue {
return i.Attribute()
}
// Attribute returns the current KeyValue of the Iterator. It must be called
// only after Next returns true.
func (i *Iterator) Attribute() KeyValue {
kv, _ := i.storage.Get(i.idx)
return kv
}
// IndexedLabel returns current index and attribute. Must be called only
// after Next returns true.
//
// Deprecated: Use IndexedAttribute instead.
func (i *Iterator) IndexedLabel() (int, KeyValue) {
return i.idx, i.Attribute()
}
// IndexedAttribute returns current index and attribute. Must be called only
// after Next returns true.
func (i *Iterator) IndexedAttribute() (int, KeyValue) {
return i.idx, i.Attribute()
}
// Len returns the number of attributes in the iterated set.
func (i *Iterator) Len() int {
return i.storage.Len()
}
// ToSlice is a convenience function that creates a slice of attributes from
// the passed iterator. The iterator is set up to start from the beginning
// before creating the slice.
func (i *Iterator) ToSlice() []KeyValue {
l := i.Len()
if l == 0 {
return nil
}
i.idx = -1
slice := make([]KeyValue, 0, l)
for i.Next() {
slice = append(slice, i.Attribute())
}
return slice
}
// NewMergeIterator returns a MergeIterator for merging two attribute sets.
// Duplicates are resolved by taking the value from the first set.
func NewMergeIterator(s1, s2 *Set) MergeIterator {
mi := MergeIterator{
one: makeOne(s1.Iter()),
two: makeOne(s2.Iter()),
}
return mi
}
func makeOne(iter Iterator) oneIterator {
oi := oneIterator{
iter: iter,
}
oi.advance()
return oi
}
func (oi *oneIterator) advance() {
if oi.done = !oi.iter.Next(); !oi.done {
oi.attr = oi.iter.Attribute()
}
}
// Next returns true if there is another attribute available.
func (m *MergeIterator) Next() bool {
if m.one.done && m.two.done {
return false
}
if m.one.done {
m.current = m.two.attr
m.two.advance()
return true
}
if m.two.done {
m.current = m.one.attr
m.one.advance()
return true
}
if m.one.attr.Key == m.two.attr.Key {
m.current = m.one.attr // first iterator attribute value wins
m.one.advance()
m.two.advance()
return true
}
if m.one.attr.Key < m.two.attr.Key {
m.current = m.one.attr
m.one.advance()
return true
}
m.current = m.two.attr
m.two.advance()
return true
}
// Label returns the current value after Next() returns true.
//
// Deprecated: Use Attribute instead.
func (m *MergeIterator) Label() KeyValue {
return m.current
}
// Attribute returns the current value after Next() returns true.
func (m *MergeIterator) Attribute() KeyValue {
return m.current
}
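// Example (sketch): merging two sets. Attributes are visited in key order and
// the first set wins for duplicate keys.
//
//	s1 := NewSet(String("k", "one"))
//	s2 := NewSet(String("k", "two"), Bool("b", true))
//	mi := NewMergeIterator(&s1, &s2)
//	for mi.Next() {
//	    _ = mi.Attribute() // {b true}, then {k one}
//	}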

123
vendor/go.opentelemetry.io/otel/attribute/key.go generated vendored Normal file
View File

@ -0,0 +1,123 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
// Key represents the key part in key-value pairs. It's a string. The
// allowed character set in the key depends on the use of the key.
type Key string
// Bool creates a KeyValue instance with a BOOL Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Bool(name, value).
func (k Key) Bool(v bool) KeyValue {
return KeyValue{
Key: k,
Value: BoolValue(v),
}
}
// BoolSlice creates a KeyValue instance with a BOOLSLICE Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- BoolSlice(name, value).
func (k Key) BoolSlice(v []bool) KeyValue {
return KeyValue{
Key: k,
Value: BoolSliceValue(v),
}
}
// Int creates a KeyValue instance with an INT64 Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Int(name, value).
func (k Key) Int(v int) KeyValue {
return KeyValue{
Key: k,
Value: IntValue(v),
}
}
// IntSlice creates a KeyValue instance with an INT64SLICE Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- IntSlice(name, value).
func (k Key) IntSlice(v []int) KeyValue {
return KeyValue{
Key: k,
Value: IntSliceValue(v),
}
}
// Int64 creates a KeyValue instance with an INT64 Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Int64(name, value).
func (k Key) Int64(v int64) KeyValue {
return KeyValue{
Key: k,
Value: Int64Value(v),
}
}
// Int64Slice creates a KeyValue instance with an INT64SLICE Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Int64Slice(name, value).
func (k Key) Int64Slice(v []int64) KeyValue {
return KeyValue{
Key: k,
Value: Int64SliceValue(v),
}
}
// Float64 creates a KeyValue instance with a FLOAT64 Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Float64(name, value).
func (k Key) Float64(v float64) KeyValue {
return KeyValue{
Key: k,
Value: Float64Value(v),
}
}
// Float64Slice creates a KeyValue instance with a FLOAT64SLICE Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Float64(name, value).
func (k Key) Float64Slice(v []float64) KeyValue {
return KeyValue{
Key: k,
Value: Float64SliceValue(v),
}
}
// String creates a KeyValue instance with a STRING Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- String(name, value).
func (k Key) String(v string) KeyValue {
return KeyValue{
Key: k,
Value: StringValue(v),
}
}
// StringSlice creates a KeyValue instance with a STRINGSLICE Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- StringSlice(name, value).
func (k Key) StringSlice(v []string) KeyValue {
return KeyValue{
Key: k,
Value: StringSliceValue(v),
}
}
// Defined returns true for non-empty keys.
func (k Key) Defined() bool {
return len(k) != 0
}

75
vendor/go.opentelemetry.io/otel/attribute/kv.go generated vendored Normal file
View File

@ -0,0 +1,75 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
import (
"fmt"
)
// KeyValue holds a key and value pair.
type KeyValue struct {
Key Key
Value Value
}
// Valid returns if kv is a valid OpenTelemetry attribute.
func (kv KeyValue) Valid() bool {
return kv.Key.Defined() && kv.Value.Type() != INVALID
}
// Bool creates a KeyValue with a BOOL Value type.
func Bool(k string, v bool) KeyValue {
return Key(k).Bool(v)
}
// BoolSlice creates a KeyValue with a BOOLSLICE Value type.
func BoolSlice(k string, v []bool) KeyValue {
return Key(k).BoolSlice(v)
}
// Int creates a KeyValue with an INT64 Value type.
func Int(k string, v int) KeyValue {
return Key(k).Int(v)
}
// IntSlice creates a KeyValue with an INT64SLICE Value type.
func IntSlice(k string, v []int) KeyValue {
return Key(k).IntSlice(v)
}
// Int64 creates a KeyValue with an INT64 Value type.
func Int64(k string, v int64) KeyValue {
return Key(k).Int64(v)
}
// Int64Slice creates a KeyValue with an INT64SLICE Value type.
func Int64Slice(k string, v []int64) KeyValue {
return Key(k).Int64Slice(v)
}
// Float64 creates a KeyValue with a FLOAT64 Value type.
func Float64(k string, v float64) KeyValue {
return Key(k).Float64(v)
}
// Float64Slice creates a KeyValue with a FLOAT64SLICE Value type.
func Float64Slice(k string, v []float64) KeyValue {
return Key(k).Float64Slice(v)
}
// String creates a KeyValue with a STRING Value type.
func String(k, v string) KeyValue {
return Key(k).String(v)
}
// StringSlice creates a KeyValue with a STRINGSLICE Value type.
func StringSlice(k string, v []string) KeyValue {
return Key(k).StringSlice(v)
}
// Stringer creates a new key-value pair with a passed name and a string
// value generated by the passed Stringer interface.
func Stringer(k string, v fmt.Stringer) KeyValue {
return Key(k).String(v.String())
}

431
vendor/go.opentelemetry.io/otel/attribute/set.go generated vendored Normal file
View File

@ -0,0 +1,431 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
import (
"cmp"
"encoding/json"
"reflect"
"slices"
"sort"
)
type (
// Set is the representation for a distinct attribute set. It manages an
// immutable set of attributes, with an internal cache for storing
// attribute encodings.
//
// This type will remain comparable for backwards compatibility. The
// equivalence of Sets across versions is not guaranteed to be stable.
// Prior versions may find two Sets to be equal or not when compared
// directly (i.e. ==), but subsequent versions may not. Users should use
// the Equals method to ensure stable equivalence checking.
//
// Users should also use the Distinct returned from Equivalent as a map key
// instead of a Set directly. In addition to that type providing guarantees
// on stable equivalence, it may also provide performance improvements.
Set struct {
equivalent Distinct
}
// Distinct is a unique identifier of a Set.
//
// Distinct is designed to ensure equivalence stability: comparisons
// will return the same value across versions. For this reason, Distinct
// should always be used as a map key instead of a Set.
Distinct struct {
iface interface{}
}
// Sortable implements sort.Interface, used for sorting KeyValue.
//
// Deprecated: This type is no longer used. It was added as a performance
// optimization for Go < 1.21 that is no longer needed (Go < 1.21 is no
// longer supported by the module).
Sortable []KeyValue
)
var (
// keyValueType is used in computeDistinctReflect.
keyValueType = reflect.TypeOf(KeyValue{})
// emptySet is returned for empty attribute sets.
emptySet = &Set{
equivalent: Distinct{
iface: [0]KeyValue{},
},
}
)
// EmptySet returns a reference to a Set with no elements.
//
// This is a convenience provided for optimized calling utility.
func EmptySet() *Set {
return emptySet
}
// reflectValue abbreviates reflect.ValueOf(d).
func (d Distinct) reflectValue() reflect.Value {
return reflect.ValueOf(d.iface)
}
// Valid returns true if this value refers to a valid Set.
func (d Distinct) Valid() bool {
return d.iface != nil
}
// Len returns the number of attributes in this set.
func (l *Set) Len() int {
if l == nil || !l.equivalent.Valid() {
return 0
}
return l.equivalent.reflectValue().Len()
}
// Get returns the KeyValue at ordered position idx in this set.
func (l *Set) Get(idx int) (KeyValue, bool) {
if l == nil || !l.equivalent.Valid() {
return KeyValue{}, false
}
value := l.equivalent.reflectValue()
if idx >= 0 && idx < value.Len() {
// Note: The Go compiler successfully avoids an allocation for
// the interface{} conversion here:
return value.Index(idx).Interface().(KeyValue), true
}
return KeyValue{}, false
}
// Value returns the value of a specified key in this set.
func (l *Set) Value(k Key) (Value, bool) {
if l == nil || !l.equivalent.Valid() {
return Value{}, false
}
rValue := l.equivalent.reflectValue()
vlen := rValue.Len()
idx := sort.Search(vlen, func(idx int) bool {
return rValue.Index(idx).Interface().(KeyValue).Key >= k
})
if idx >= vlen {
return Value{}, false
}
keyValue := rValue.Index(idx).Interface().(KeyValue)
if k == keyValue.Key {
return keyValue.Value, true
}
return Value{}, false
}
// HasValue tests whether a key is defined in this set.
func (l *Set) HasValue(k Key) bool {
if l == nil {
return false
}
_, ok := l.Value(k)
return ok
}
// Iter returns an iterator for visiting the attributes in this set.
func (l *Set) Iter() Iterator {
return Iterator{
storage: l,
idx: -1,
}
}
// ToSlice returns the set of attributes belonging to this set, sorted, where
// keys appear no more than once.
func (l *Set) ToSlice() []KeyValue {
iter := l.Iter()
return iter.ToSlice()
}
// Equivalent returns a value that may be used as a map key. The Distinct
// type guarantees that the result will equal the equivalent Distinct value
// of any attribute set with the same elements as this, where sets are made
// unique by choosing the last value in the input for any given key.
func (l *Set) Equivalent() Distinct {
if l == nil || !l.equivalent.Valid() {
return emptySet.equivalent
}
return l.equivalent
}
// Equals returns true if the argument set is equivalent to this set.
func (l *Set) Equals(o *Set) bool {
return l.Equivalent() == o.Equivalent()
}
// Encoded returns the encoded form of this set, according to encoder.
func (l *Set) Encoded(encoder Encoder) string {
if l == nil || encoder == nil {
return ""
}
return encoder.Encode(l.Iter())
}
func empty() Set {
return Set{
equivalent: emptySet.equivalent,
}
}
// NewSet returns a new Set. See the documentation for
// NewSetWithSortableFiltered for more details.
//
// Except for empty sets, this method adds an additional allocation compared
// with calls that include a Sortable.
func NewSet(kvs ...KeyValue) Set {
s, _ := NewSetWithFiltered(kvs, nil)
return s
}
// NewSetWithSortable returns a new Set. See the documentation for
// NewSetWithSortableFiltered for more details.
//
// This call includes a Sortable option as a memory optimization.
//
// Deprecated: Use [NewSet] instead.
func NewSetWithSortable(kvs []KeyValue, _ *Sortable) Set {
s, _ := NewSetWithFiltered(kvs, nil)
return s
}
// NewSetWithFiltered returns a new Set. See the documentation for
// NewSetWithSortableFiltered for more details.
//
// This call includes a Filter to include/exclude attribute keys from the
// return value. Excluded keys are returned as a slice of attribute values.
func NewSetWithFiltered(kvs []KeyValue, filter Filter) (Set, []KeyValue) {
// Check for empty set.
if len(kvs) == 0 {
return empty(), nil
}
// Stable sort so the following de-duplication can implement
// last-value-wins semantics.
slices.SortStableFunc(kvs, func(a, b KeyValue) int {
return cmp.Compare(a.Key, b.Key)
})
position := len(kvs) - 1
offset := position - 1
// The requirements stated above require that the stable
// result be placed at the end of the input slice, while
// overwritten values are swapped to the beginning.
//
// De-duplicate with last-value-wins semantics. Preserve
// duplicate values at the beginning of the input slice.
for ; offset >= 0; offset-- {
if kvs[offset].Key == kvs[position].Key {
continue
}
position--
kvs[offset], kvs[position] = kvs[position], kvs[offset]
}
kvs = kvs[position:]
if filter != nil {
if div := filteredToFront(kvs, filter); div != 0 {
return Set{equivalent: computeDistinct(kvs[div:])}, kvs[:div]
}
}
return Set{equivalent: computeDistinct(kvs)}, nil
}
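// Usage sketch, separate from the vendored source above: duplicate keys are
// resolved last-value-wins, and NewSetWithFiltered splits excluded attributes
// out of the returned Set. Only the exported API of this package is used.
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
)

func main() {
	s := attribute.NewSet(
		attribute.String("env", "dev"),
		attribute.String("env", "prod"), // last value wins
		attribute.Int("replicas", 3),
	)
	if v, ok := s.Value("env"); ok {
		fmt.Println("env =", v.AsString()) // env = prod
	}

	// Drop sensitive attributes; they are returned instead of being lost.
	keep := attribute.Filter(func(kv attribute.KeyValue) bool {
		return kv.Key != "password"
	})
	set, dropped := attribute.NewSetWithFiltered([]attribute.KeyValue{
		attribute.String("user", "alice"),
		attribute.String("password", "hunter2"),
	}, keep)
	fmt.Println(set.Len(), len(dropped)) // 1 1
}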
// NewSetWithSortableFiltered returns a new Set.
//
// Duplicate keys are eliminated by taking the last value. This
// re-orders the input slice so that unique last-values are contiguous
// at the end of the slice.
//
// This ensures the following:
//
// - Last-value-wins semantics
// - Caller sees the reordering, but doesn't lose values
// - Repeated calls preserve last-value-wins semantics.
//
// Note that methods are defined on Set, although this returns Set. Callers
// can avoid memory allocations by:
//
// - allocating a Sortable for use as a temporary in this method
// - allocating a Set for storing the return value of this constructor.
//
// The result maintains a cache of encoded attributes, by attribute.EncoderID.
// This value should not be copied after its first use.
//
// The second []KeyValue return value is a list of attributes that were
// excluded by the Filter (if non-nil).
//
// Deprecated: Use [NewSetWithFiltered] instead.
func NewSetWithSortableFiltered(kvs []KeyValue, _ *Sortable, filter Filter) (Set, []KeyValue) {
return NewSetWithFiltered(kvs, filter)
}
// filteredToFront filters slice in-place using keep function. All KeyValues that need to
// be removed are moved to the front. All KeyValues that need to be kept are
// moved (in-order) to the back. The index for the first KeyValue to be kept is
// returned.
func filteredToFront(slice []KeyValue, keep Filter) int {
n := len(slice)
j := n
for i := n - 1; i >= 0; i-- {
if keep(slice[i]) {
j--
slice[i], slice[j] = slice[j], slice[i]
}
}
return j
}
// Filter returns a filtered copy of this Set. See the documentation for
// NewSetWithSortableFiltered for more details.
func (l *Set) Filter(re Filter) (Set, []KeyValue) {
if re == nil {
return *l, nil
}
// Iterate in reverse to the first attribute that will be filtered out.
n := l.Len()
first := n - 1
for ; first >= 0; first-- {
kv, _ := l.Get(first)
if !re(kv) {
break
}
}
// No attributes will be dropped, return the immutable Set l and nil.
if first < 0 {
return *l, nil
}
// Copy now that we know we need to return a modified set.
//
// Do not do this in-place on the underlying storage of *Set l. Sets are
// immutable and filtering should not change this.
slice := l.ToSlice()
// Don't re-iterate the slice if only slice[0] is filtered.
if first == 0 {
// It is safe to assume len(slice) >= 1 given we found at least one
// attribute above that needs to be filtered out.
return Set{equivalent: computeDistinct(slice[1:])}, slice[:1]
}
// Move the filtered slice[first] to the front (preserving order).
kv := slice[first]
copy(slice[1:first+1], slice[:first])
slice[0] = kv
// Do not re-evaluate re(slice[first+1:]).
div := filteredToFront(slice[1:first+1], re) + 1
return Set{equivalent: computeDistinct(slice[div:])}, slice[:div]
}
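// Sketch of the Filter method above, separate from the vendored source: the
// receiver Set is immutable, so a filtered copy and the dropped attributes
// are returned.
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
)

func main() {
	s := attribute.NewSet(
		attribute.String("host", "db-1"),
		attribute.Bool("internal", true),
	)
	public, dropped := s.Filter(func(kv attribute.KeyValue) bool {
		return kv.Key != "internal"
	})
	fmt.Println(public.Len(), len(dropped)) // 1 1
	fmt.Println(s.Len())                    // 2 (original set is unchanged)
}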
// computeDistinct returns a Distinct using either the fixed- or
// reflect-oriented code path, depending on the size of the input. The input
// slice is assumed to already be sorted and de-duplicated.
func computeDistinct(kvs []KeyValue) Distinct {
iface := computeDistinctFixed(kvs)
if iface == nil {
iface = computeDistinctReflect(kvs)
}
return Distinct{
iface: iface,
}
}
// computeDistinctFixed computes a Distinct for small slices. It returns nil
// if the input is too large for this code path.
func computeDistinctFixed(kvs []KeyValue) interface{} {
switch len(kvs) {
case 1:
ptr := new([1]KeyValue)
copy((*ptr)[:], kvs)
return *ptr
case 2:
ptr := new([2]KeyValue)
copy((*ptr)[:], kvs)
return *ptr
case 3:
ptr := new([3]KeyValue)
copy((*ptr)[:], kvs)
return *ptr
case 4:
ptr := new([4]KeyValue)
copy((*ptr)[:], kvs)
return *ptr
case 5:
ptr := new([5]KeyValue)
copy((*ptr)[:], kvs)
return *ptr
case 6:
ptr := new([6]KeyValue)
copy((*ptr)[:], kvs)
return *ptr
case 7:
ptr := new([7]KeyValue)
copy((*ptr)[:], kvs)
return *ptr
case 8:
ptr := new([8]KeyValue)
copy((*ptr)[:], kvs)
return *ptr
case 9:
ptr := new([9]KeyValue)
copy((*ptr)[:], kvs)
return *ptr
case 10:
ptr := new([10]KeyValue)
copy((*ptr)[:], kvs)
return *ptr
default:
return nil
}
}
// computeDistinctReflect computes a Distinct using reflection, works for any
// size input.
func computeDistinctReflect(kvs []KeyValue) interface{} {
at := reflect.New(reflect.ArrayOf(len(kvs), keyValueType)).Elem()
for i, keyValue := range kvs {
*(at.Index(i).Addr().Interface().(*KeyValue)) = keyValue
}
return at.Interface()
}
// MarshalJSON returns the JSON encoding of the Set.
func (l *Set) MarshalJSON() ([]byte, error) {
return json.Marshal(l.equivalent.iface)
}
// MarshalLog is the marshaling function used by the logging system to represent this Set.
func (l Set) MarshalLog() interface{} {
kvs := make(map[string]string)
for _, kv := range l.ToSlice() {
kvs[string(kv.Key)] = kv.Value.Emit()
}
return kvs
}
// Len implements sort.Interface.
func (l *Sortable) Len() int {
return len(*l)
}
// Swap implements sort.Interface.
func (l *Sortable) Swap(i, j int) {
(*l)[i], (*l)[j] = (*l)[j], (*l)[i]
}
// Less implements sort.Interface.
func (l *Sortable) Less(i, j int) bool {
return (*l)[i].Key < (*l)[j].Key
}

View File

@ -0,0 +1,31 @@
// Code generated by "stringer -type=Type"; DO NOT EDIT.
package attribute
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[INVALID-0]
_ = x[BOOL-1]
_ = x[INT64-2]
_ = x[FLOAT64-3]
_ = x[STRING-4]
_ = x[BOOLSLICE-5]
_ = x[INT64SLICE-6]
_ = x[FLOAT64SLICE-7]
_ = x[STRINGSLICE-8]
}
const _Type_name = "INVALIDBOOLINT64FLOAT64STRINGBOOLSLICEINT64SLICEFLOAT64SLICESTRINGSLICE"
var _Type_index = [...]uint8{0, 7, 11, 16, 23, 29, 38, 48, 60, 71}
func (i Type) String() string {
if i < 0 || i >= Type(len(_Type_index)-1) {
return "Type(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _Type_name[_Type_index[i]:_Type_index[i+1]]
}
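// Small sketch, separate from the vendored source: the generated String
// method renders each Type constant as its name and falls back to "Type(n)"
// for unknown values.
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
)

func main() {
	fmt.Println(attribute.BOOL.String())        // BOOL
	fmt.Println(attribute.STRINGSLICE.String()) // STRINGSLICE
	fmt.Println(attribute.Type(42).String())    // Type(42)
}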

271
vendor/go.opentelemetry.io/otel/attribute/value.go generated vendored Normal file
View File

@ -0,0 +1,271 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
import (
"encoding/json"
"fmt"
"reflect"
"strconv"
"go.opentelemetry.io/otel/internal"
"go.opentelemetry.io/otel/internal/attribute"
)
//go:generate stringer -type=Type
// Type describes the type of the data Value holds.
type Type int // nolint: revive // redefines builtin Type.
// Value represents the value part in key-value pairs.
type Value struct {
vtype Type
numeric uint64
stringly string
slice interface{}
}
const (
// INVALID is used for a Value with no value set.
INVALID Type = iota
// BOOL is a boolean Type Value.
BOOL
// INT64 is a 64-bit signed integral Type Value.
INT64
// FLOAT64 is a 64-bit floating point Type Value.
FLOAT64
// STRING is a string Type Value.
STRING
// BOOLSLICE is a slice of booleans Type Value.
BOOLSLICE
// INT64SLICE is a slice of 64-bit signed integral numbers Type Value.
INT64SLICE
// FLOAT64SLICE is a slice of 64-bit floating point numbers Type Value.
FLOAT64SLICE
// STRINGSLICE is a slice of strings Type Value.
STRINGSLICE
)
// BoolValue creates a BOOL Value.
func BoolValue(v bool) Value {
return Value{
vtype: BOOL,
numeric: internal.BoolToRaw(v),
}
}
// BoolSliceValue creates a BOOLSLICE Value.
func BoolSliceValue(v []bool) Value {
return Value{vtype: BOOLSLICE, slice: attribute.BoolSliceValue(v)}
}
// IntValue creates an INT64 Value.
func IntValue(v int) Value {
return Int64Value(int64(v))
}
// IntSliceValue creates an INT64SLICE Value.
func IntSliceValue(v []int) Value {
var int64Val int64
cp := reflect.New(reflect.ArrayOf(len(v), reflect.TypeOf(int64Val)))
for i, val := range v {
cp.Elem().Index(i).SetInt(int64(val))
}
return Value{
vtype: INT64SLICE,
slice: cp.Elem().Interface(),
}
}
// Int64Value creates an INT64 Value.
func Int64Value(v int64) Value {
return Value{
vtype: INT64,
numeric: internal.Int64ToRaw(v),
}
}
// Int64SliceValue creates an INT64SLICE Value.
func Int64SliceValue(v []int64) Value {
return Value{vtype: INT64SLICE, slice: attribute.Int64SliceValue(v)}
}
// Float64Value creates a FLOAT64 Value.
func Float64Value(v float64) Value {
return Value{
vtype: FLOAT64,
numeric: internal.Float64ToRaw(v),
}
}
// Float64SliceValue creates a FLOAT64SLICE Value.
func Float64SliceValue(v []float64) Value {
return Value{vtype: FLOAT64SLICE, slice: attribute.Float64SliceValue(v)}
}
// StringValue creates a STRING Value.
func StringValue(v string) Value {
return Value{
vtype: STRING,
stringly: v,
}
}
// StringSliceValue creates a STRINGSLICE Value.
func StringSliceValue(v []string) Value {
return Value{vtype: STRINGSLICE, slice: attribute.StringSliceValue(v)}
}
// Type returns the type of the Value.
func (v Value) Type() Type {
return v.vtype
}
// AsBool returns the bool value. Make sure that the Value's type is
// BOOL.
func (v Value) AsBool() bool {
return internal.RawToBool(v.numeric)
}
// AsBoolSlice returns the []bool value. Make sure that the Value's type is
// BOOLSLICE.
func (v Value) AsBoolSlice() []bool {
if v.vtype != BOOLSLICE {
return nil
}
return v.asBoolSlice()
}
func (v Value) asBoolSlice() []bool {
return attribute.AsBoolSlice(v.slice)
}
// AsInt64 returns the int64 value. Make sure that the Value's type is
// INT64.
func (v Value) AsInt64() int64 {
return internal.RawToInt64(v.numeric)
}
// AsInt64Slice returns the []int64 value. Make sure that the Value's type is
// INT64SLICE.
func (v Value) AsInt64Slice() []int64 {
if v.vtype != INT64SLICE {
return nil
}
return v.asInt64Slice()
}
func (v Value) asInt64Slice() []int64 {
return attribute.AsInt64Slice(v.slice)
}
// AsFloat64 returns the float64 value. Make sure that the Value's
// type is FLOAT64.
func (v Value) AsFloat64() float64 {
return internal.RawToFloat64(v.numeric)
}
// AsFloat64Slice returns the []float64 value. Make sure that the Value's type is
// FLOAT64SLICE.
func (v Value) AsFloat64Slice() []float64 {
if v.vtype != FLOAT64SLICE {
return nil
}
return v.asFloat64Slice()
}
func (v Value) asFloat64Slice() []float64 {
return attribute.AsFloat64Slice(v.slice)
}
// AsString returns the string value. Make sure that the Value's type
// is STRING.
func (v Value) AsString() string {
return v.stringly
}
// AsStringSlice returns the []string value. Make sure that the Value's type is
// STRINGSLICE.
func (v Value) AsStringSlice() []string {
if v.vtype != STRINGSLICE {
return nil
}
return v.asStringSlice()
}
func (v Value) asStringSlice() []string {
return attribute.AsStringSlice(v.slice)
}
type unknownValueType struct{}
// AsInterface returns Value's data as interface{}.
func (v Value) AsInterface() interface{} {
switch v.Type() {
case BOOL:
return v.AsBool()
case BOOLSLICE:
return v.asBoolSlice()
case INT64:
return v.AsInt64()
case INT64SLICE:
return v.asInt64Slice()
case FLOAT64:
return v.AsFloat64()
case FLOAT64SLICE:
return v.asFloat64Slice()
case STRING:
return v.stringly
case STRINGSLICE:
return v.asStringSlice()
}
return unknownValueType{}
}
// Emit returns a string representation of Value's data.
func (v Value) Emit() string {
switch v.Type() {
case BOOLSLICE:
return fmt.Sprint(v.asBoolSlice())
case BOOL:
return strconv.FormatBool(v.AsBool())
case INT64SLICE:
j, err := json.Marshal(v.asInt64Slice())
if err != nil {
return fmt.Sprintf("invalid: %v", v.asInt64Slice())
}
return string(j)
case INT64:
return strconv.FormatInt(v.AsInt64(), 10)
case FLOAT64SLICE:
j, err := json.Marshal(v.asFloat64Slice())
if err != nil {
return fmt.Sprintf("invalid: %v", v.asFloat64Slice())
}
return string(j)
case FLOAT64:
return fmt.Sprint(v.AsFloat64())
case STRINGSLICE:
j, err := json.Marshal(v.asStringSlice())
if err != nil {
return fmt.Sprintf("invalid: %v", v.asStringSlice())
}
return string(j)
case STRING:
return v.stringly
default:
return "unknown"
}
}
// MarshalJSON returns the JSON encoding of the Value.
func (v Value) MarshalJSON() ([]byte, error) {
var jsonVal struct {
Type string
Value interface{}
}
jsonVal.Type = v.Type().String()
jsonVal.Value = v.AsInterface()
return json.Marshal(jsonVal)
}
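// Usage sketch, separate from the vendored source, for the Value constructors
// and accessors above.
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
)

func main() {
	i := attribute.Int64Value(42)
	fmt.Println(i.Type(), i.AsInt64(), i.Emit()) // INT64 42 42

	s := attribute.StringSliceValue([]string{"a", "b"})
	fmt.Println(s.Type(), s.Emit()) // STRINGSLICE ["a","b"]

	// AsInterface is useful when the concrete type is not known up front.
	for _, v := range []attribute.Value{i, s, attribute.BoolValue(true)} {
		fmt.Printf("%T\n", v.AsInterface()) // int64, []string, bool
	}
}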

3
vendor/go.opentelemetry.io/otel/baggage/README.md generated vendored Normal file
View File

@ -0,0 +1,3 @@
# Baggage
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/baggage)](https://pkg.go.dev/go.opentelemetry.io/otel/baggage)

910
vendor/go.opentelemetry.io/otel/baggage/baggage.go generated vendored Normal file
View File

@ -0,0 +1,910 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package baggage // import "go.opentelemetry.io/otel/baggage"
import (
"errors"
"fmt"
"net/url"
"strings"
"unicode/utf8"
"go.opentelemetry.io/otel/internal/baggage"
)
const (
maxMembers = 180
maxBytesPerMembers = 4096
maxBytesPerBaggageString = 8192
listDelimiter = ","
keyValueDelimiter = "="
propertyDelimiter = ";"
)
var (
errInvalidKey = errors.New("invalid key")
errInvalidValue = errors.New("invalid value")
errInvalidProperty = errors.New("invalid baggage list-member property")
errInvalidMember = errors.New("invalid baggage list-member")
errMemberNumber = errors.New("too many list-members in baggage-string")
errMemberBytes = errors.New("list-member too large")
errBaggageBytes = errors.New("baggage-string too large")
)
// Property is an additional metadata entry for a baggage list-member.
type Property struct {
key, value string
// hasValue distinguishes a property whose value was never set from one
// whose value is the zero-value (an empty string).
hasValue bool
}
// NewKeyProperty returns a new Property for key.
//
// If key is invalid, an error will be returned.
func NewKeyProperty(key string) (Property, error) {
if !validateKey(key) {
return newInvalidProperty(), fmt.Errorf("%w: %q", errInvalidKey, key)
}
p := Property{key: key}
return p, nil
}
// NewKeyValueProperty returns a new Property for key with value.
//
// The passed key must be compliant with W3C Baggage specification.
// The passed value must be percent-encoded as defined in W3C Baggage specification.
//
// Notice: Consider using [NewKeyValuePropertyRaw] instead
// that does not require percent-encoding of the value.
func NewKeyValueProperty(key, value string) (Property, error) {
if !validateValue(value) {
return newInvalidProperty(), fmt.Errorf("%w: %q", errInvalidValue, value)
}
decodedValue, err := url.PathUnescape(value)
if err != nil {
return newInvalidProperty(), fmt.Errorf("%w: %q", errInvalidValue, value)
}
return NewKeyValuePropertyRaw(key, decodedValue)
}
// NewKeyValuePropertyRaw returns a new Property for key with value.
//
// The passed key must be compliant with W3C Baggage specification.
func NewKeyValuePropertyRaw(key, value string) (Property, error) {
if !validateKey(key) {
return newInvalidProperty(), fmt.Errorf("%w: %q", errInvalidKey, key)
}
p := Property{
key: key,
value: value,
hasValue: true,
}
return p, nil
}
func newInvalidProperty() Property {
return Property{}
}
// parseProperty attempts to decode a Property from the passed string. It
// returns an error if the input is invalid according to the W3C Baggage
// specification.
func parseProperty(property string) (Property, error) {
if property == "" {
return newInvalidProperty(), nil
}
p, ok := parsePropertyInternal(property)
if !ok {
return newInvalidProperty(), fmt.Errorf("%w: %q", errInvalidProperty, property)
}
return p, nil
}
// validate ensures p conforms to the W3C Baggage specification, returning an
// error otherwise.
func (p Property) validate() error {
errFunc := func(err error) error {
return fmt.Errorf("invalid property: %w", err)
}
if !validateKey(p.key) {
return errFunc(fmt.Errorf("%w: %q", errInvalidKey, p.key))
}
if !p.hasValue && p.value != "" {
return errFunc(errors.New("inconsistent value"))
}
return nil
}
// Key returns the Property key.
func (p Property) Key() string {
return p.key
}
// Value returns the Property value. Additionally, a boolean is returned
// indicating whether the Property has a value at all; this distinguishes a
// Property with an empty value from one whose value is not set.
func (p Property) Value() (string, bool) {
return p.value, p.hasValue
}
// String encodes Property into a header string compliant with the W3C Baggage
// specification.
func (p Property) String() string {
if p.hasValue {
return fmt.Sprintf("%s%s%v", p.key, keyValueDelimiter, valueEscape(p.value))
}
return p.key
}
type properties []Property
func fromInternalProperties(iProps []baggage.Property) properties {
if len(iProps) == 0 {
return nil
}
props := make(properties, len(iProps))
for i, p := range iProps {
props[i] = Property{
key: p.Key,
value: p.Value,
hasValue: p.HasValue,
}
}
return props
}
func (p properties) asInternal() []baggage.Property {
if len(p) == 0 {
return nil
}
iProps := make([]baggage.Property, len(p))
for i, prop := range p {
iProps[i] = baggage.Property{
Key: prop.key,
Value: prop.value,
HasValue: prop.hasValue,
}
}
return iProps
}
func (p properties) Copy() properties {
if len(p) == 0 {
return nil
}
props := make(properties, len(p))
copy(props, p)
return props
}
// validate ensures each Property in p conforms to the W3C Baggage
// specification, returning an error otherwise.
func (p properties) validate() error {
for _, prop := range p {
if err := prop.validate(); err != nil {
return err
}
}
return nil
}
// String encodes properties into a header string compliant with the W3C Baggage
// specification.
func (p properties) String() string {
props := make([]string, len(p))
for i, prop := range p {
props[i] = prop.String()
}
return strings.Join(props, propertyDelimiter)
}
// Member is a list-member of a baggage-string as defined by the W3C Baggage
// specification.
type Member struct {
key, value string
properties properties
// hasData indicates whether the created member contains data or not.
// Members that do not contain data are invalid with no other check
// required.
hasData bool
}
// NewMember returns a new Member from the passed arguments.
//
// The passed key must be compliant with W3C Baggage specification.
// The passed value must be percent-encoded as defined in W3C Baggage specification.
//
// Notice: Consider using [NewMemberRaw] instead
// that does not require percent-encoding of the value.
func NewMember(key, value string, props ...Property) (Member, error) {
if !validateValue(value) {
return newInvalidMember(), fmt.Errorf("%w: %q", errInvalidValue, value)
}
decodedValue, err := url.PathUnescape(value)
if err != nil {
return newInvalidMember(), fmt.Errorf("%w: %q", errInvalidValue, value)
}
return NewMemberRaw(key, decodedValue, props...)
}
// NewMemberRaw returns a new Member from the passed arguments.
//
// The passed key must be compliant with W3C Baggage specification.
func NewMemberRaw(key, value string, props ...Property) (Member, error) {
m := Member{
key: key,
value: value,
properties: properties(props).Copy(),
hasData: true,
}
if err := m.validate(); err != nil {
return newInvalidMember(), err
}
return m, nil
}
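// Sketch, separate from the vendored source, contrasting the two member
// constructors above: NewMember expects a percent-encoded value, while
// NewMemberRaw takes the value verbatim and percent-encodes it on output.
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/baggage"
)

func main() {
	enc, _ := baggage.NewMember("userID", "42%20a")  // value decoded to "42 a"
	raw, _ := baggage.NewMemberRaw("userID", "42 a") // value used as-is
	fmt.Println(enc.Value() == raw.Value())          // true
	fmt.Println(raw.String())                        // userID=42%20a
}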
func newInvalidMember() Member {
return Member{}
}
// parseMember attempts to decode a Member from the passed string. It returns
// an error if the input is invalid according to the W3C Baggage
// specification.
func parseMember(member string) (Member, error) {
if n := len(member); n > maxBytesPerMembers {
return newInvalidMember(), fmt.Errorf("%w: %d", errMemberBytes, n)
}
var props properties
keyValue, properties, found := strings.Cut(member, propertyDelimiter)
if found {
// Parse the member properties.
for _, pStr := range strings.Split(properties, propertyDelimiter) {
p, err := parseProperty(pStr)
if err != nil {
return newInvalidMember(), err
}
props = append(props, p)
}
}
// Parse the member key/value pair.
// Take into account a value can contain equal signs (=).
k, v, found := strings.Cut(keyValue, keyValueDelimiter)
if !found {
return newInvalidMember(), fmt.Errorf("%w: %q", errInvalidMember, member)
}
// "Leading and trailing whitespaces are allowed but MUST be trimmed
// when converting the header into a data structure."
key := strings.TrimSpace(k)
if !validateKey(key) {
return newInvalidMember(), fmt.Errorf("%w: %q", errInvalidKey, key)
}
val := strings.TrimSpace(v)
if !validateValue(val) {
return newInvalidMember(), fmt.Errorf("%w: %q", errInvalidValue, v)
}
// Decode a percent-encoded value.
value, err := url.PathUnescape(val)
if err != nil {
return newInvalidMember(), fmt.Errorf("%w: %w", errInvalidValue, err)
}
return Member{key: key, value: value, properties: props, hasData: true}, nil
}
// validate ensures m conforms to the W3C Baggage specification; in
// particular, a key must be an ASCII string. An error is returned otherwise.
func (m Member) validate() error {
if !m.hasData {
return fmt.Errorf("%w: %q", errInvalidMember, m)
}
if !validateKey(m.key) {
return fmt.Errorf("%w: %q", errInvalidKey, m.key)
}
return m.properties.validate()
}
// Key returns the Member key.
func (m Member) Key() string { return m.key }
// Value returns the Member value.
func (m Member) Value() string { return m.value }
// Properties returns a copy of the Member properties.
func (m Member) Properties() []Property { return m.properties.Copy() }
// String encodes Member into a header string compliant with the W3C Baggage
// specification.
func (m Member) String() string {
// A key is just an ASCII string. A value is restricted to be
// US-ASCII characters excluding CTLs, whitespace,
// DQUOTE, comma, semicolon, and backslash.
s := m.key + keyValueDelimiter + valueEscape(m.value)
if len(m.properties) > 0 {
s += propertyDelimiter + m.properties.String()
}
return s
}
// Baggage is a list of baggage members representing the baggage-string as
// defined by the W3C Baggage specification.
type Baggage struct { //nolint:golint
list baggage.List
}
// New returns a new valid Baggage. It returns an error if the result would
// exceed the limits set in the W3C Baggage specification.
//
// It expects all the provided members to have already been validated.
func New(members ...Member) (Baggage, error) {
if len(members) == 0 {
return Baggage{}, nil
}
b := make(baggage.List)
for _, m := range members {
if !m.hasData {
return Baggage{}, errInvalidMember
}
// OpenTelemetry resolves duplicates by last-one-wins.
b[m.key] = baggage.Item{
Value: m.value,
Properties: m.properties.asInternal(),
}
}
// Check member numbers after deduplication.
if len(b) > maxMembers {
return Baggage{}, errMemberNumber
}
bag := Baggage{b}
if n := len(bag.String()); n > maxBytesPerBaggageString {
return Baggage{}, fmt.Errorf("%w: %d", errBaggageBytes, n)
}
return bag, nil
}
// Parse attempts to decode a baggage-string from the passed string. It
// returns an error if the input is invalid according to the W3C Baggage
// specification.
//
// If there are duplicate list-members contained in baggage, the last one
// defined (reading left-to-right) will be the only one kept. This diverges
// from the W3C Baggage specification which allows duplicate list-members, but
// conforms to the OpenTelemetry Baggage specification.
func Parse(bStr string) (Baggage, error) {
if bStr == "" {
return Baggage{}, nil
}
if n := len(bStr); n > maxBytesPerBaggageString {
return Baggage{}, fmt.Errorf("%w: %d", errBaggageBytes, n)
}
b := make(baggage.List)
for _, memberStr := range strings.Split(bStr, listDelimiter) {
m, err := parseMember(memberStr)
if err != nil {
return Baggage{}, err
}
// OpenTelemetry resolves duplicates by last-one-wins.
b[m.key] = baggage.Item{
Value: m.value,
Properties: m.properties.asInternal(),
}
}
// OpenTelemetry does not allow for duplicate list-members, but the W3C
// specification does. Now that we have deduplicated, ensure the baggage
// does not exceed list-member limits.
if len(b) > maxMembers {
return Baggage{}, errMemberNumber
}
return Baggage{b}, nil
}
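// Round-trip sketch, separate from the vendored source: Parse applies
// last-one-wins to duplicate keys and keeps list-member properties.
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/baggage"
)

func main() {
	b, err := baggage.Parse("userID=1,region=eu;prio=low,userID=2")
	if err != nil {
		panic(err)
	}
	fmt.Println(b.Len())                                  // 2 (duplicate userID collapsed)
	fmt.Println(b.Member("userID").Value())               // 2
	fmt.Println(b.Member("region").Properties()[0].Key()) // prio
}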
// Member returns the baggage list-member identified by key.
//
// If there is no list-member matching the passed key the returned Member will
// be a zero-value Member.
// The returned member is not validated, as we assume the validation happened
// when it was added to the Baggage.
func (b Baggage) Member(key string) Member {
v, ok := b.list[key]
if !ok {
// We do not need to worry about distinguishing between the situation
// where a zero-valued Member is included in the Baggage because a
// zero-valued Member is invalid according to the W3C Baggage
// specification (it has an empty key).
return newInvalidMember()
}
return Member{
key: key,
value: v.Value,
properties: fromInternalProperties(v.Properties),
hasData: true,
}
}
// Members returns all the baggage list-members.
// The order of the returned list-members does not have significance.
//
// The returned members are not validated, as we assume the validation happened
// when they were added to the Baggage.
func (b Baggage) Members() []Member {
if len(b.list) == 0 {
return nil
}
members := make([]Member, 0, len(b.list))
for k, v := range b.list {
members = append(members, Member{
key: k,
value: v.Value,
properties: fromInternalProperties(v.Properties),
hasData: true,
})
}
return members
}
// SetMember returns a copy of the Baggage with the member included. If the
// baggage contains a Member with the same key the existing Member is
// replaced.
//
// If member is invalid according to the W3C Baggage specification, an error
// is returned with the original Baggage.
func (b Baggage) SetMember(member Member) (Baggage, error) {
if !member.hasData {
return b, errInvalidMember
}
n := len(b.list)
if _, ok := b.list[member.key]; !ok {
n++
}
list := make(baggage.List, n)
for k, v := range b.list {
// Do not copy if we are just going to overwrite.
if k == member.key {
continue
}
list[k] = v
}
list[member.key] = baggage.Item{
Value: member.value,
Properties: member.properties.asInternal(),
}
return Baggage{list: list}, nil
}
// DeleteMember returns a copy of the Baggage with the list-member identified
// by key removed.
func (b Baggage) DeleteMember(key string) Baggage {
n := len(b.list)
if _, ok := b.list[key]; ok {
n--
}
list := make(baggage.List, n)
for k, v := range b.list {
if k == key {
continue
}
list[k] = v
}
return Baggage{list: list}
}
// Len returns the number of list-members in the Baggage.
func (b Baggage) Len() int {
return len(b.list)
}
// String encodes Baggage into a header string compliant with the W3C Baggage
// specification.
func (b Baggage) String() string {
members := make([]string, 0, len(b.list))
for k, v := range b.list {
members = append(members, Member{
key: k,
value: v.Value,
properties: fromInternalProperties(v.Properties),
}.String())
}
return strings.Join(members, listDelimiter)
}
// parsePropertyInternal attempts to decode a Property from the passed string.
// It follows the spec at https://www.w3.org/TR/baggage/#definition.
func parsePropertyInternal(s string) (p Property, ok bool) {
// For the entire function we will use " key = value " as an example.
// Attempting to parse the key.
// First skip spaces at the beginning "< >key = value " (they could be empty).
index := skipSpace(s, 0)
// Parse the key: " <key> = value ".
keyStart := index
keyEnd := index
for _, c := range s[keyStart:] {
if !validateKeyChar(c) {
break
}
keyEnd++
}
// If we couldn't find any valid key character,
// it means the key is either empty or invalid.
if keyStart == keyEnd {
return
}
// Skip spaces after the key: " key< >= value ".
index = skipSpace(s, keyEnd)
if index == len(s) {
// A key can have no value, like: " key ".
ok = true
p.key = s[keyStart:keyEnd]
return
}
// If we have not reached the end and we can't find the '=' delimiter,
// it means the property is invalid.
if s[index] != keyValueDelimiter[0] {
return
}
// Attempting to parse the value.
// Match: " key =< >value ".
index = skipSpace(s, index+1)
// Match the value string: " key = <value> ".
// A valid property can be: " key =".
// Therefore, we don't have to check if the value is empty.
valueStart := index
valueEnd := index
for _, c := range s[valueStart:] {
if !validateValueChar(c) {
break
}
valueEnd++
}
// Skip all trailing whitespaces: " key = value< >".
index = skipSpace(s, valueEnd)
// If after looking for the value and skipping whitespaces
// we have not reached the end, it means the property is
// invalid, something like: " key = value value1".
if index != len(s) {
return
}
// Decode a percent-encoded value.
value, err := url.PathUnescape(s[valueStart:valueEnd])
if err != nil {
return
}
ok = true
p.key = s[keyStart:keyEnd]
p.hasValue = true
p.value = value
return
}
func skipSpace(s string, offset int) int {
i := offset
for ; i < len(s); i++ {
c := s[i]
if c != ' ' && c != '\t' {
break
}
}
return i
}
var safeKeyCharset = [utf8.RuneSelf]bool{
// 0x23 to 0x27
'#': true,
'$': true,
'%': true,
'&': true,
'\'': true,
// 0x30 to 0x39
'0': true,
'1': true,
'2': true,
'3': true,
'4': true,
'5': true,
'6': true,
'7': true,
'8': true,
'9': true,
// 0x41 to 0x5a
'A': true,
'B': true,
'C': true,
'D': true,
'E': true,
'F': true,
'G': true,
'H': true,
'I': true,
'J': true,
'K': true,
'L': true,
'M': true,
'N': true,
'O': true,
'P': true,
'Q': true,
'R': true,
'S': true,
'T': true,
'U': true,
'V': true,
'W': true,
'X': true,
'Y': true,
'Z': true,
// 0x5e to 0x7a
'^': true,
'_': true,
'`': true,
'a': true,
'b': true,
'c': true,
'd': true,
'e': true,
'f': true,
'g': true,
'h': true,
'i': true,
'j': true,
'k': true,
'l': true,
'm': true,
'n': true,
'o': true,
'p': true,
'q': true,
'r': true,
's': true,
't': true,
'u': true,
'v': true,
'w': true,
'x': true,
'y': true,
'z': true,
// remainder
'!': true,
'*': true,
'+': true,
'-': true,
'.': true,
'|': true,
'~': true,
}
func validateKey(s string) bool {
if len(s) == 0 {
return false
}
for _, c := range s {
if !validateKeyChar(c) {
return false
}
}
return true
}
func validateKeyChar(c int32) bool {
return c >= 0 && c < int32(utf8.RuneSelf) && safeKeyCharset[c]
}
func validateValue(s string) bool {
for _, c := range s {
if !validateValueChar(c) {
return false
}
}
return true
}
var safeValueCharset = [utf8.RuneSelf]bool{
'!': true, // 0x21
// 0x23 to 0x2b
'#': true,
'$': true,
'%': true,
'&': true,
'\'': true,
'(': true,
')': true,
'*': true,
'+': true,
// 0x2d to 0x3a
'-': true,
'.': true,
'/': true,
'0': true,
'1': true,
'2': true,
'3': true,
'4': true,
'5': true,
'6': true,
'7': true,
'8': true,
'9': true,
':': true,
// 0x3c to 0x5b
'<': true, // 0x3C
'=': true, // 0x3D
'>': true, // 0x3E
'?': true, // 0x3F
'@': true, // 0x40
'A': true, // 0x41
'B': true, // 0x42
'C': true, // 0x43
'D': true, // 0x44
'E': true, // 0x45
'F': true, // 0x46
'G': true, // 0x47
'H': true, // 0x48
'I': true, // 0x49
'J': true, // 0x4A
'K': true, // 0x4B
'L': true, // 0x4C
'M': true, // 0x4D
'N': true, // 0x4E
'O': true, // 0x4F
'P': true, // 0x50
'Q': true, // 0x51
'R': true, // 0x52
'S': true, // 0x53
'T': true, // 0x54
'U': true, // 0x55
'V': true, // 0x56
'W': true, // 0x57
'X': true, // 0x58
'Y': true, // 0x59
'Z': true, // 0x5A
'[': true, // 0x5B
// 0x5d to 0x7e
']': true, // 0x5D
'^': true, // 0x5E
'_': true, // 0x5F
'`': true, // 0x60
'a': true, // 0x61
'b': true, // 0x62
'c': true, // 0x63
'd': true, // 0x64
'e': true, // 0x65
'f': true, // 0x66
'g': true, // 0x67
'h': true, // 0x68
'i': true, // 0x69
'j': true, // 0x6A
'k': true, // 0x6B
'l': true, // 0x6C
'm': true, // 0x6D
'n': true, // 0x6E
'o': true, // 0x6F
'p': true, // 0x70
'q': true, // 0x71
'r': true, // 0x72
's': true, // 0x73
't': true, // 0x74
'u': true, // 0x75
'v': true, // 0x76
'w': true, // 0x77
'x': true, // 0x78
'y': true, // 0x79
'z': true, // 0x7A
'{': true, // 0x7B
'|': true, // 0x7C
'}': true, // 0x7D
'~': true, // 0x7E
}
func validateValueChar(c int32) bool {
return c >= 0 && c < int32(utf8.RuneSelf) && safeValueCharset[c]
}
// valueEscape escapes the string so it can be safely placed inside a baggage value,
// replacing special characters with %XX sequences as needed.
//
// The implementation is based on:
// https://github.com/golang/go/blob/f6509cf5cdbb5787061b784973782933c47f1782/src/net/url/url.go#L285.
func valueEscape(s string) string {
hexCount := 0
for i := 0; i < len(s); i++ {
c := s[i]
if shouldEscape(c) {
hexCount++
}
}
if hexCount == 0 {
return s
}
var buf [64]byte
var t []byte
required := len(s) + 2*hexCount
if required <= len(buf) {
t = buf[:required]
} else {
t = make([]byte, required)
}
j := 0
for i := 0; i < len(s); i++ {
c := s[i]
if shouldEscape(s[i]) {
const upperhex = "0123456789ABCDEF"
t[j] = '%'
t[j+1] = upperhex[c>>4]
t[j+2] = upperhex[c&15]
j += 3
} else {
t[j] = c
j++
}
}
return string(t)
}
// shouldEscape returns true if the specified byte should be escaped when
// appearing in a baggage value string.
func shouldEscape(c byte) bool {
if c == '%' {
// The percent character must be encoded so that percent-encoding can work.
return true
}
return !validateValueChar(int32(c))
}

28
vendor/go.opentelemetry.io/otel/baggage/context.go generated vendored Normal file
View File

@ -0,0 +1,28 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package baggage // import "go.opentelemetry.io/otel/baggage"
import (
"context"
"go.opentelemetry.io/otel/internal/baggage"
)
// ContextWithBaggage returns a copy of parent with baggage.
func ContextWithBaggage(parent context.Context, b Baggage) context.Context {
// Delegate so any hooks for the OpenTracing bridge are handled.
return baggage.ContextWithList(parent, b.list)
}
// ContextWithoutBaggage returns a copy of parent with no baggage.
func ContextWithoutBaggage(parent context.Context) context.Context {
// Delegate so any hooks for the OpenTracing bridge are handled.
return baggage.ContextWithList(parent, nil)
}
// FromContext returns the baggage contained in ctx.
func FromContext(ctx context.Context) Baggage {
// Delegate so any hooks for the OpenTracing bridge are handled.
return Baggage{list: baggage.ListFromContext(ctx)}
}
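// Propagation sketch, separate from the vendored source: baggage is attached
// to a context and read back further down the call chain.
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/baggage"
)

func main() {
	m, _ := baggage.NewMemberRaw("tenant", "acme")
	b, _ := baggage.New(m)
	handle(baggage.ContextWithBaggage(context.Background(), b))
}

func handle(ctx context.Context) {
	fmt.Println(baggage.FromContext(ctx).Member("tenant").Value()) // acme
}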

9
vendor/go.opentelemetry.io/otel/baggage/doc.go generated vendored Normal file
View File

@ -0,0 +1,9 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package baggage provides functionality for storing and retrieving
baggage items in Go context. For propagating the baggage, see the
go.opentelemetry.io/otel/propagation package.
*/
package baggage // import "go.opentelemetry.io/otel/baggage"

3
vendor/go.opentelemetry.io/otel/codes/README.md generated vendored Normal file
View File

@ -0,0 +1,3 @@
# Codes
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/codes)](https://pkg.go.dev/go.opentelemetry.io/otel/codes)

105
vendor/go.opentelemetry.io/otel/codes/codes.go generated vendored Normal file
View File

@ -0,0 +1,105 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package codes // import "go.opentelemetry.io/otel/codes"
import (
"encoding/json"
"fmt"
"strconv"
)
const (
// Unset is the default status code.
Unset Code = 0
// Error indicates the operation contains an error.
//
// NOTE: The error code in OTLP is 2.
// The value of this enum is only relevant to the internals
// of the Go SDK.
Error Code = 1
// Ok indicates the operation has been validated by an application developer
// or operator to have completed successfully, or to contain no error.
//
// NOTE: The Ok code in OTLP is 1.
// The value of this enum is only relevant to the internals
// of the Go SDK.
Ok Code = 2
maxCode = 3
)
// Code is a 32-bit representation of a status state.
type Code uint32
var codeToStr = map[Code]string{
Unset: "Unset",
Error: "Error",
Ok: "Ok",
}
var strToCode = map[string]Code{
`"Unset"`: Unset,
`"Error"`: Error,
`"Ok"`: Ok,
}
// String returns the Code as a string.
func (c Code) String() string {
return codeToStr[c]
}
// UnmarshalJSON unmarshals b into the Code.
//
// This is based on the functionality in the gRPC codes package:
// https://github.com/grpc/grpc-go/blob/bb64fee312b46ebee26be43364a7a966033521b1/codes/codes.go#L218-L244
func (c *Code) UnmarshalJSON(b []byte) error {
// From json.Unmarshaler: By convention, to approximate the behavior of
// Unmarshal itself, Unmarshalers implement UnmarshalJSON([]byte("null")) as
// a no-op.
if string(b) == "null" {
return nil
}
if c == nil {
return fmt.Errorf("nil receiver passed to UnmarshalJSON")
}
var x interface{}
if err := json.Unmarshal(b, &x); err != nil {
return err
}
switch x.(type) {
case string:
if jc, ok := strToCode[string(b)]; ok {
*c = jc
return nil
}
return fmt.Errorf("invalid code: %q", string(b))
case float64:
if ci, err := strconv.ParseUint(string(b), 10, 32); err == nil {
if ci >= maxCode {
return fmt.Errorf("invalid code: %q", ci)
}
*c = Code(ci)
return nil
}
return fmt.Errorf("invalid code: %q", string(b))
default:
return fmt.Errorf("invalid code: %q", string(b))
}
}
// MarshalJSON returns c as the JSON encoding of c.
func (c *Code) MarshalJSON() ([]byte, error) {
if c == nil {
return []byte("null"), nil
}
str, ok := codeToStr[*c]
if !ok {
return nil, fmt.Errorf("invalid code: %d", *c)
}
return []byte(fmt.Sprintf("%q", str)), nil
}
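// JSON round-trip sketch, separate from the vendored source. Marshaling goes
// through a pointer because MarshalJSON is defined on *Code.
package main

import (
	"encoding/json"
	"fmt"

	"go.opentelemetry.io/otel/codes"
)

func main() {
	var c codes.Code
	if err := json.Unmarshal([]byte(`"Error"`), &c); err != nil {
		panic(err)
	}
	fmt.Println(c == codes.Error) // true

	out, _ := json.Marshal(&c)
	fmt.Println(string(out)) // "Error"
}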

10
vendor/go.opentelemetry.io/otel/codes/doc.go generated vendored Normal file
View File

@ -0,0 +1,10 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package codes defines the canonical error codes used by OpenTelemetry.
It conforms to [the OpenTelemetry
specification](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.20.0/specification/trace/api.md#set-status).
*/
package codes // import "go.opentelemetry.io/otel/codes"

23
vendor/go.opentelemetry.io/otel/doc.go generated vendored Normal file
View File

@ -0,0 +1,23 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package otel provides global access to the OpenTelemetry API. The subpackages of
the otel package provide an implementation of the OpenTelemetry API.
The provided API is used to instrument code and measure data about that code's
performance and operation. The measured data, by default, is not processed or
transmitted anywhere. An implementation of the OpenTelemetry SDK, like the
default SDK implementation (go.opentelemetry.io/otel/sdk), and associated
exporters are used to process and transport this data.
To read the getting started guide, see https://opentelemetry.io/docs/languages/go/getting-started/.
To read more about tracing, see go.opentelemetry.io/otel/trace.
To read more about metrics, see go.opentelemetry.io/otel/metric.
To read more about propagation, see go.opentelemetry.io/otel/propagation and
go.opentelemetry.io/otel/baggage.
*/
package otel // import "go.opentelemetry.io/otel"

27
vendor/go.opentelemetry.io/otel/error_handler.go generated vendored Normal file
View File

@ -0,0 +1,27 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otel // import "go.opentelemetry.io/otel"
// ErrorHandler handles irremediable events.
type ErrorHandler interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Handle handles any error deemed irremediable by an OpenTelemetry
// component.
Handle(error)
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
// ErrorHandlerFunc is a convenience adapter to allow the use of a function
// as an ErrorHandler.
type ErrorHandlerFunc func(error)
var _ ErrorHandler = ErrorHandlerFunc(nil)
// Handle handles the irremediable error by calling the ErrorHandlerFunc itself.
func (f ErrorHandlerFunc) Handle(err error) {
f(err)
}
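// Sketch, separate from the vendored source, of registering a handler. It
// assumes otel.SetErrorHandler and otel.Handle from this module's root
// package, which route irremediable errors to the registered handler.
package main

import (
	"errors"
	"log"

	"go.opentelemetry.io/otel"
)

func main() {
	otel.SetErrorHandler(otel.ErrorHandlerFunc(func(err error) {
		log.Printf("otel error: %v", err)
	}))

	// SDK components report through otel.Handle; it can also be called directly.
	otel.Handle(errors.New("exporter unreachable"))
}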

View File

@ -0,0 +1,3 @@
# OTLP Metric gRPC Exporter
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc)](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc)

View File

@ -0,0 +1,200 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpmetricgrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
import (
"context"
"time"
"google.golang.org/genproto/googleapis/rpc/errdetails"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/status"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal"
"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/oconf"
"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/retry"
colmetricpb "go.opentelemetry.io/proto/otlp/collector/metrics/v1"
metricpb "go.opentelemetry.io/proto/otlp/metrics/v1"
)
type client struct {
metadata metadata.MD
exportTimeout time.Duration
requestFunc retry.RequestFunc
// ourConn keeps track of where conn was created: true if created here in
// NewClient, or false if passed with an option. This is important on
// Shutdown as the conn should only be closed if we created it. Otherwise,
// it is up to the processes that passed the conn to close it.
ourConn bool
conn *grpc.ClientConn
msc colmetricpb.MetricsServiceClient
}
// newClient creates a new gRPC metric client.
func newClient(_ context.Context, cfg oconf.Config) (*client, error) {
c := &client{
exportTimeout: cfg.Metrics.Timeout,
requestFunc: cfg.RetryConfig.RequestFunc(retryable),
conn: cfg.GRPCConn,
}
if len(cfg.Metrics.Headers) > 0 {
c.metadata = metadata.New(cfg.Metrics.Headers)
}
if c.conn == nil {
// If the caller did not provide a ClientConn when the client was
// created, create one using the configuration they did provide.
userAgent := "OTel Go OTLP over gRPC metrics exporter/" + Version()
dialOpts := []grpc.DialOption{grpc.WithUserAgent(userAgent)}
dialOpts = append(dialOpts, cfg.DialOptions...)
conn, err := grpc.NewClient(cfg.Metrics.Endpoint, dialOpts...)
if err != nil {
return nil, err
}
// Keep track that we own the lifecycle of this conn and need to close
// it on Shutdown.
c.ourConn = true
c.conn = conn
}
c.msc = colmetricpb.NewMetricsServiceClient(c.conn)
return c, nil
}
// Shutdown shuts down the client, freeing all resources.
//
// Any active connections to a remote endpoint are closed if they were created
// by the client. Any gRPC connection passed during creation using
// WithGRPCConn will not be closed. It is the caller's responsibility to
// handle cleanup of that resource.
func (c *client) Shutdown(ctx context.Context) error {
// The otlpmetric.Exporter synchronizes access to client methods and
// ensures this is called only once. The only thing that needs to be done
// here is to release any computational resources the client holds.
c.metadata = nil
c.requestFunc = nil
c.msc = nil
err := ctx.Err()
if c.ourConn {
closeErr := c.conn.Close()
// A context timeout error takes precedence over this error.
if err == nil && closeErr != nil {
err = closeErr
}
}
c.conn = nil
return err
}
// UploadMetrics sends protoMetrics to connected endpoint.
//
// Retryable errors from the server will be handled according to any
// RetryConfig the client was created with.
func (c *client) UploadMetrics(ctx context.Context, protoMetrics *metricpb.ResourceMetrics) error {
// The otlpmetric.Exporter synchronizes access to client methods, and
// ensures this is not called after the Exporter is shutdown. Only thing
// to do here is send data.
select {
case <-ctx.Done():
// Do not upload if the context is already expired.
return ctx.Err()
default:
}
ctx, cancel := c.exportContext(ctx)
defer cancel()
return c.requestFunc(ctx, func(iCtx context.Context) error {
resp, err := c.msc.Export(iCtx, &colmetricpb.ExportMetricsServiceRequest{
ResourceMetrics: []*metricpb.ResourceMetrics{protoMetrics},
})
if resp != nil && resp.PartialSuccess != nil {
msg := resp.PartialSuccess.GetErrorMessage()
n := resp.PartialSuccess.GetRejectedDataPoints()
if n != 0 || msg != "" {
err := internal.MetricPartialSuccessError(n, msg)
otel.Handle(err)
}
}
// nil is converted to OK.
if status.Code(err) == codes.OK {
// Success.
return nil
}
return err
})
}
// exportContext returns a copy of parent with an appropriate deadline and
// cancellation function based on the client's configured export timeout.
//
// It is the caller's responsibility to cancel the returned context once its
// use is complete, via the parent or directly with the returned CancelFunc, to
// ensure all resources are correctly released.
func (c *client) exportContext(parent context.Context) (context.Context, context.CancelFunc) {
var (
ctx context.Context
cancel context.CancelFunc
)
if c.exportTimeout > 0 {
ctx, cancel = context.WithTimeout(parent, c.exportTimeout)
} else {
ctx, cancel = context.WithCancel(parent)
}
if c.metadata.Len() > 0 {
ctx = metadata.NewOutgoingContext(ctx, c.metadata)
}
return ctx, cancel
}
// retryable returns whether err identifies a request that can be retried and
// a duration to wait before retrying if an explicit throttle time is included in err.
func retryable(err error) (bool, time.Duration) {
s := status.Convert(err)
return retryableGRPCStatus(s)
}
func retryableGRPCStatus(s *status.Status) (bool, time.Duration) {
switch s.Code() {
case codes.Canceled,
codes.DeadlineExceeded,
codes.Aborted,
codes.OutOfRange,
codes.Unavailable,
codes.DataLoss:
// Additionally, handle RetryInfo.
_, d := throttleDelay(s)
return true, d
case codes.ResourceExhausted:
// Retry only if the server signals that the recovery from resource exhaustion is possible.
return throttleDelay(s)
}
// Not a retry-able error.
return false, 0
}
// throttleDelay returns whether the status includes RetryInfo details
// and the duration to wait if an explicit throttle time is included.
func throttleDelay(s *status.Status) (bool, time.Duration) {
for _, detail := range s.Details() {
if t, ok := detail.(*errdetails.RetryInfo); ok {
return true, t.RetryDelay.AsDuration()
}
}
return false, 0
}
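To make the classification above concrete, a minimal test-style sketch as it might appear inside this package (retryable is unexported, so it cannot be called from outside; the error messages are hypothetical):

// ok is true and delay is zero: Unavailable is retryable and no RetryInfo was attached.
ok, delay := retryable(status.Error(codes.Unavailable, "collector restarting"))
// notOK is false: InvalidArgument is not treated as transient.
notOK, _ := retryable(status.Error(codes.InvalidArgument, "bad request"))
_, _, _ = ok, delay, notOK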

View File

@ -0,0 +1,264 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpmetricgrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
import (
"fmt"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/oconf"
"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/retry"
"go.opentelemetry.io/otel/sdk/metric"
)
// Option applies a configuration option to the Exporter.
type Option interface {
applyGRPCOption(oconf.Config) oconf.Config
}
func asGRPCOptions(opts []Option) []oconf.GRPCOption {
converted := make([]oconf.GRPCOption, len(opts))
for i, o := range opts {
converted[i] = oconf.NewGRPCOption(o.applyGRPCOption)
}
return converted
}
// RetryConfig defines configuration for retrying the export of metric data
// that failed.
//
// This configuration does not define any network retry strategy. That is
// entirely handled by the gRPC ClientConn.
type RetryConfig retry.Config
type wrappedOption struct {
oconf.GRPCOption
}
func (w wrappedOption) applyGRPCOption(cfg oconf.Config) oconf.Config {
return w.ApplyGRPCOption(cfg)
}
// WithInsecure disables client transport security for the Exporter's gRPC
// connection, just like grpc.WithInsecure()
// (https://pkg.go.dev/google.golang.org/grpc#WithInsecure) does.
//
// If the OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
// environment variable is set, and this option is not passed, that variable
// value will be used to determine client security. If the endpoint has a
// scheme of "http" or "unix" client security will be disabled. If both are
// set, OTEL_EXPORTER_OTLP_METRICS_ENDPOINT will take precedence.
//
// By default, if an environment variable is not set, and this option is not
// passed, client security will be used.
//
// This option has no effect if WithGRPCConn is used.
func WithInsecure() Option {
return wrappedOption{oconf.WithInsecure()}
}
// WithEndpoint sets the target endpoint the Exporter will connect to.
//
// If the OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
// environment variable is set, and this option is not passed, that variable
// value will be used. If both are set, OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
// will take precedence.
//
// If both this option and WithEndpointURL are used, the last used option will
// take precedence.
//
// By default, if an environment variable is not set, and this option is not
// passed, "localhost:4317" will be used.
//
// This option has no effect if WithGRPCConn is used.
func WithEndpoint(endpoint string) Option {
return wrappedOption{oconf.WithEndpoint(endpoint)}
}
// WithEndpointURL sets the target endpoint URL the Exporter will connect to.
//
// If the OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
// environment variable is set, and this option is not passed, that variable
// value will be used. If both are set, OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
// will take precedence.
//
// If both this option and WithEndpoint are used, the last used option will
// take precedence.
//
// If an invalid URL is provided, the default value will be kept.
//
// By default, if an environment variable is not set, and this option is not
// passed, "localhost:4317" will be used.
//
// This option has no effect if WithGRPCConn is used.
func WithEndpointURL(u string) Option {
return wrappedOption{oconf.WithEndpointURL(u)}
}
// WithReconnectionPeriod set the minimum amount of time between connection
// attempts to the target endpoint.
//
// This option has no effect if WithGRPCConn is used.
func WithReconnectionPeriod(rp time.Duration) Option {
return wrappedOption{oconf.NewGRPCOption(func(cfg oconf.Config) oconf.Config {
cfg.ReconnectionPeriod = rp
return cfg
})}
}
func compressorToCompression(compressor string) oconf.Compression {
if compressor == "gzip" {
return oconf.GzipCompression
}
otel.Handle(fmt.Errorf("invalid compression type: '%s', using no compression as default", compressor))
return oconf.NoCompression
}
// WithCompressor sets the compressor the gRPC client uses.
// Supported compressor values: "gzip".
//
// If the OTEL_EXPORTER_OTLP_COMPRESSION or
// OTEL_EXPORTER_OTLP_METRICS_COMPRESSION environment variable is set, and
// this option is not passed, that variable value will be used. That value can
// be either "none" or "gzip". If both are set,
// OTEL_EXPORTER_OTLP_METRICS_COMPRESSION will take precedence.
//
// By default, if an environment variable is not set, and this option is not
// passed, no compressor will be used.
//
// This option has no effect if WithGRPCConn is used.
func WithCompressor(compressor string) Option {
return wrappedOption{oconf.WithCompression(compressorToCompression(compressor))}
}
// WithHeaders will send the provided headers with each gRPC request.
//
// If the OTEL_EXPORTER_OTLP_HEADERS or OTEL_EXPORTER_OTLP_METRICS_HEADERS
// environment variable is set, and this option is not passed, that variable
// value will be used. The value will be parsed as a list of key value pairs.
// These pairs are expected to be in the W3C Correlation-Context format
// without additional semi-colon delimited metadata (i.e. "k1=v1,k2=v2"). If
// both are set, OTEL_EXPORTER_OTLP_METRICS_HEADERS will take precedence.
//
// By default, if an environment variable is not set, and this option is not
// passed, no user headers will be set.
func WithHeaders(headers map[string]string) Option {
return wrappedOption{oconf.WithHeaders(headers)}
}
// WithTLSCredentials sets the gRPC connection to use creds.
//
// If the OTEL_EXPORTER_OTLP_CERTIFICATE or
// OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE environment variable is set, and
// this option is not passed, that variable value will be used. The value will
// be parsed as the filepath of the TLS certificate chain to use. If both are
// set, OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE will take precedence.
//
// By default, if an environment variable is not set, and this option is not
// passed, no TLS credentials will be used.
//
// This option has no effect if WithGRPCConn is used.
func WithTLSCredentials(creds credentials.TransportCredentials) Option {
return wrappedOption{oconf.NewGRPCOption(func(cfg oconf.Config) oconf.Config {
cfg.Metrics.GRPCCredentials = creds
return cfg
})}
}
// WithServiceConfig defines the default gRPC service config used.
//
// This option has no effect if WithGRPCConn is used.
func WithServiceConfig(serviceConfig string) Option {
return wrappedOption{oconf.NewGRPCOption(func(cfg oconf.Config) oconf.Config {
cfg.ServiceConfig = serviceConfig
return cfg
})}
}
// WithDialOption sets explicit grpc.DialOptions to use when establishing a
// gRPC connection. The options here are appended to the internal grpc.DialOptions
// used so they will take precedence over any other internal grpc.DialOptions
// they might conflict with.
// The [grpc.WithBlock], [grpc.WithTimeout], and [grpc.WithReturnConnectionError]
// grpc.DialOptions are ignored.
//
// This option has no effect if WithGRPCConn is used.
func WithDialOption(opts ...grpc.DialOption) Option {
return wrappedOption{oconf.NewGRPCOption(func(cfg oconf.Config) oconf.Config {
cfg.DialOptions = opts
return cfg
})}
}
// WithGRPCConn sets conn as the gRPC ClientConn used for all communication.
//
// This option takes precedence over any other option that relates to
// establishing or persisting a gRPC connection to a target endpoint. Any
// other option of those types passed will be ignored.
//
// It is the caller's responsibility to close the passed conn. The Exporter
// Shutdown method will not close this connection.
func WithGRPCConn(conn *grpc.ClientConn) Option {
return wrappedOption{oconf.NewGRPCOption(func(cfg oconf.Config) oconf.Config {
cfg.GRPCConn = conn
return cfg
})}
}
// WithTimeout sets the max amount of time an Exporter will attempt an export.
//
// This takes precedence over any retry settings defined by WithRetry. Once
// this time limit has been reached the export is abandoned and the metric
// data is dropped.
//
// If the OTEL_EXPORTER_OTLP_TIMEOUT or OTEL_EXPORTER_OTLP_METRICS_TIMEOUT
// environment variable is set, and this option is not passed, that variable
// value will be used. The value will be parsed as an integer representing the
// timeout in milliseconds. If both are set,
// OTEL_EXPORTER_OTLP_METRICS_TIMEOUT will take precedence.
//
// By default, if an environment variable is not set, and this option is not
// passed, a timeout of 10 seconds will be used.
func WithTimeout(duration time.Duration) Option {
return wrappedOption{oconf.WithTimeout(duration)}
}
// WithRetry sets the retry policy for transient retryable errors that are
// returned by the target endpoint.
//
// If the target endpoint responds with not only a retryable error, but
// explicitly returns a backoff time in the response, that time will take
// precedence over these settings.
//
// These settings do not define any network retry strategy. That is entirely
// handled by the gRPC ClientConn.
//
// If unset, the default retry policy will be used. It will retry the export
// 5 seconds after receiving a retryable error and increase exponentially
// after each error for no more than a total time of 1 minute.
func WithRetry(settings RetryConfig) Option {
return wrappedOption{oconf.WithRetry(retry.Config(settings))}
}
// WithTemporalitySelector sets the TemporalitySelector the client will use to
// determine the Temporality of an instrument based on its kind. If this option
// is not used, the client will use the DefaultTemporalitySelector from the
// go.opentelemetry.io/otel/sdk/metric package.
func WithTemporalitySelector(selector metric.TemporalitySelector) Option {
return wrappedOption{oconf.WithTemporalitySelector(selector)}
}
// WithAggregationSelector sets the AggregationSelector the client will use to
// determine the aggregation to use for an instrument based on its kind. If
// this option is not used, the reader will use the DefaultAggregationSelector
// from the go.opentelemetry.io/otel/sdk/metric package, or the aggregation
// explicitly passed for a view matching an instrument.
func WithAggregationSelector(selector metric.AggregationSelector) Option {
return wrappedOption{oconf.WithAggregationSelector(selector)}
}
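A hedged sketch combining several of these options when constructing the exporter (it assumes a context.Context named ctx and the time package are in scope; the endpoint, durations, and bearer token are placeholders):

exp, err := otlpmetricgrpc.New(ctx,
	otlpmetricgrpc.WithEndpointURL("https://collector.example.com:4317"),
	otlpmetricgrpc.WithTimeout(20*time.Second),
	otlpmetricgrpc.WithRetry(otlpmetricgrpc.RetryConfig{
		Enabled:         true,
		InitialInterval: 2 * time.Second,
		MaxInterval:     30 * time.Second,
		MaxElapsedTime:  2 * time.Minute,
	}),
	otlpmetricgrpc.WithHeaders(map[string]string{"authorization": "Bearer <token>"}),
)
if err != nil {
	// handle error
}
_ = exp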

View File

@ -0,0 +1,85 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package otlpmetricgrpc provides an OTLP metrics exporter using gRPC.
By default the telemetry is sent to https://localhost:4317.
Exporter should be created using [New] and used with a [metric.PeriodicReader].
The environment variables described below can be used for configuration.
OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_METRICS_ENDPOINT (default: "https://localhost:4317") -
target to which the exporter sends telemetry.
The target syntax is defined in https://github.com/grpc/grpc/blob/master/doc/naming.md.
The value must contain a host.
The value may additionally contain a port, a scheme, and a path.
The value accepts the "http" and "https" schemes.
The value should not contain a query string or fragment.
OTEL_EXPORTER_OTLP_METRICS_ENDPOINT takes precedence over OTEL_EXPORTER_OTLP_ENDPOINT.
The configuration can be overridden by [WithEndpoint], [WithEndpointURL], [WithInsecure], and [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_INSECURE, OTEL_EXPORTER_OTLP_METRICS_INSECURE (default: "false") -
setting "true" disables client transport security for the exporter's gRPC connection.
You can use this only when an endpoint is provided without the http or https scheme.
The OTEL_EXPORTER_OTLP_INSECURE, OTEL_EXPORTER_OTLP_METRICS_INSECURE setting overrides
the scheme defined via OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_METRICS_ENDPOINT.
OTEL_EXPORTER_OTLP_METRICS_INSECURE takes precedence over OTEL_EXPORTER_OTLP_INSECURE.
The configuration can be overridden by [WithInsecure], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_HEADERS, OTEL_EXPORTER_OTLP_METRICS_HEADERS (default: none) -
key-value pairs used as gRPC metadata associated with gRPC requests.
The value is expected to be represented in a format matching the [W3C Baggage HTTP Header Content Format],
except that additional semi-colon delimited metadata is not supported.
Example value: "key1=value1,key2=value2".
OTEL_EXPORTER_OTLP_METRICS_HEADERS takes precedence over OTEL_EXPORTER_OTLP_HEADERS.
The configuration can be overridden by [WithHeaders] option.
OTEL_EXPORTER_OTLP_TIMEOUT, OTEL_EXPORTER_OTLP_METRICS_TIMEOUT (default: "10000") -
maximum time in milliseconds the OTLP exporter waits for each batch export.
OTEL_EXPORTER_OTLP_METRICS_TIMEOUT takes precedence over OTEL_EXPORTER_OTLP_TIMEOUT.
The configuration can be overridden by [WithTimeout] option.
OTEL_EXPORTER_OTLP_COMPRESSION, OTEL_EXPORTER_OTLP_METRICS_COMPRESSION (default: none) -
the gRPC compressor the exporter uses.
Supported value: "gzip".
OTEL_EXPORTER_OTLP_METRICS_COMPRESSION takes precedence over OTEL_EXPORTER_OTLP_COMPRESSION.
The configuration can be overridden by [WithCompressor], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_CERTIFICATE, OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE (default: none) -
the filepath to the trusted certificate to use when verifying a server's TLS credentials.
OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CERTIFICATE.
The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE, OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE (default: none) -
the filepath to the client certificate/chain of trust for the client's private key to use in mTLS communication, in PEM format.
OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE.
The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_CLIENT_KEY, OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY (default: none) -
the filepath to the client's private key to use in mTLS communication in PEM format.
OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY takes precedence over OTEL_EXPORTER_OTLP_CLIENT_KEY.
The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE (default: "cumulative") -
aggregation temporality to use on the basis of instrument kind. Supported values:
- "cumulative" - Cumulative aggregation temporality for all instrument kinds,
- "delta" - Delta aggregation temporality for Counter, Asynchronous Counter and Histogram instrument kinds;
Cumulative aggregation for UpDownCounter and Asynchronous UpDownCounter instrument kinds,
- "lowmemory" - Delta aggregation temporality for Synchronous Counter and Histogram instrument kinds;
Cumulative aggregation temporality for Synchronous UpDownCounter, Asynchronous Counter, and Asynchronous UpDownCounter instrument kinds.
The configuration can be overridden by [WithTemporalitySelector] option.
OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION (default: "explicit_bucket_histogram") -
default aggregation to use for histogram instruments. Supported values:
- "explicit_bucket_histogram" - [Explicit Bucket Histogram Aggregation],
- "base2_exponential_bucket_histogram" - [Base2 Exponential Bucket Histogram Aggregation].
The configuration can be overridden by [WithAggregationSelector] option.
[W3C Baggage HTTP Header Content Format]: https://www.w3.org/TR/baggage/#header-content
[Explicit Bucket Histogram Aggregation]: https://github.com/open-telemetry/opentelemetry-specification/blob/v1.26.0/specification/metrics/sdk.md#explicit-bucket-histogram-aggregation
[Base2 Exponential Bucket Histogram Aggregation]: https://github.com/open-telemetry/opentelemetry-specification/blob/v1.26.0/specification/metrics/sdk.md#base2-exponential-bucket-histogram-aggregation
*/
package otlpmetricgrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
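Because most of the settings above can also come from the environment, a minimal sketch that relies purely on environment variables (the values shown are illustrative and would normally be set outside the process):

// Typically exported in the environment, for example:
//   OTEL_EXPORTER_OTLP_ENDPOINT=https://collector.example.com:4317
//   OTEL_EXPORTER_OTLP_METRICS_TIMEOUT=5000
//   OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=delta
exp, err := otlpmetricgrpc.New(context.Background())
if err != nil {
	// handle error
}
_ = exp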

View File

@ -0,0 +1,156 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpmetricgrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
import (
"context"
"fmt"
"sync"
"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/oconf"
"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/transform"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/sdk/metric"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
metricpb "go.opentelemetry.io/proto/otlp/metrics/v1"
)
// Exporter is an OpenTelemetry metric Exporter using gRPC.
type Exporter struct {
// Ensure synchronous access to the client across all functionality.
clientMu sync.Mutex
client interface {
UploadMetrics(context.Context, *metricpb.ResourceMetrics) error
Shutdown(context.Context) error
}
temporalitySelector metric.TemporalitySelector
aggregationSelector metric.AggregationSelector
shutdownOnce sync.Once
}
func newExporter(c *client, cfg oconf.Config) (*Exporter, error) {
ts := cfg.Metrics.TemporalitySelector
if ts == nil {
ts = func(metric.InstrumentKind) metricdata.Temporality {
return metricdata.CumulativeTemporality
}
}
as := cfg.Metrics.AggregationSelector
if as == nil {
as = metric.DefaultAggregationSelector
}
return &Exporter{
client: c,
temporalitySelector: ts,
aggregationSelector: as,
}, nil
}
// Temporality returns the Temporality to use for an instrument kind.
func (e *Exporter) Temporality(k metric.InstrumentKind) metricdata.Temporality {
return e.temporalitySelector(k)
}
// Aggregation returns the Aggregation to use for an instrument kind.
func (e *Exporter) Aggregation(k metric.InstrumentKind) metric.Aggregation {
return e.aggregationSelector(k)
}
// Export transforms and transmits metric data to an OTLP receiver.
//
// This method returns an error if called after Shutdown.
// This method returns an error if the method is canceled by the passed context.
func (e *Exporter) Export(ctx context.Context, rm *metricdata.ResourceMetrics) error {
defer global.Debug("OTLP/gRPC exporter export", "Data", rm)
otlpRm, err := transform.ResourceMetrics(rm)
// Best effort upload of transformable metrics.
e.clientMu.Lock()
upErr := e.client.UploadMetrics(ctx, otlpRm)
e.clientMu.Unlock()
if upErr != nil {
if err == nil {
return fmt.Errorf("failed to upload metrics: %w", upErr)
}
// Merge the two errors.
return fmt.Errorf("failed to upload incomplete metrics (%w): %w", err, upErr)
}
return err
}
// ForceFlush flushes any metric data held by an exporter.
//
// This method returns an error if called after Shutdown.
// This method returns an error if the method is canceled by the passed context.
//
// This method is safe to call concurrently.
func (e *Exporter) ForceFlush(ctx context.Context) error {
// The exporter and client hold no state, nothing to flush.
return ctx.Err()
}
// Shutdown flushes all metric data held by an exporter and releases any held
// computational resources.
//
// This method returns an error if called after Shutdown.
// This method returns an error if the method is canceled by the passed context.
//
// This method is safe to call concurrently.
func (e *Exporter) Shutdown(ctx context.Context) error {
err := errShutdown
e.shutdownOnce.Do(func() {
e.clientMu.Lock()
client := e.client
e.client = shutdownClient{}
e.clientMu.Unlock()
err = client.Shutdown(ctx)
})
return err
}
var errShutdown = fmt.Errorf("gRPC exporter is shutdown")
type shutdownClient struct{}
func (c shutdownClient) err(ctx context.Context) error {
if err := ctx.Err(); err != nil {
return err
}
return errShutdown
}
func (c shutdownClient) UploadMetrics(ctx context.Context, _ *metricpb.ResourceMetrics) error {
return c.err(ctx)
}
func (c shutdownClient) Shutdown(ctx context.Context) error {
return c.err(ctx)
}
// MarshalLog returns logging data about the Exporter.
func (e *Exporter) MarshalLog() interface{} {
return struct{ Type string }{Type: "OTLP/gRPC"}
}
// New returns an OpenTelemetry metric Exporter. The Exporter can be used with
// a PeriodicReader to export OpenTelemetry metric data to an OTLP receiving
// endpoint using gRPC.
//
// If an already established gRPC ClientConn is not passed in options using
// WithGRPCConn, a connection to the OTLP endpoint will be established based
// on options. If a connection cannot be established in the lifetime of ctx,
// an error will be returned.
func New(ctx context.Context, options ...Option) (*Exporter, error) {
cfg := oconf.NewGRPCConfig(asGRPCOptions(options)...)
c, err := newClient(ctx, cfg)
if err != nil {
return nil, err
}
return newExporter(c, cfg)
}
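A hedged end-to-end sketch of the wiring the comment above describes, using the public SDK types from go.opentelemetry.io/otel/sdk/metric (the 30-second interval is illustrative):

package main

import (
	"context"
	"log"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	ctx := context.Background()
	exp, err := otlpmetricgrpc.New(ctx, otlpmetricgrpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	// Export collected metrics every 30 seconds through a PeriodicReader.
	reader := sdkmetric.NewPeriodicReader(exp, sdkmetric.WithInterval(30*time.Second))
	provider := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader))
	defer func() {
		if err := provider.Shutdown(ctx); err != nil {
			log.Print(err)
		}
	}()
	otel.SetMeterProvider(provider)
}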

View File

@ -0,0 +1,191 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/envconfig/envconfig.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package envconfig // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/envconfig"
import (
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"net/url"
"strconv"
"strings"
"time"
"go.opentelemetry.io/otel/internal/global"
)
// ConfigFn is the generic function used to set a config.
type ConfigFn func(*EnvOptionsReader)
// EnvOptionsReader reads the required environment variables.
type EnvOptionsReader struct {
GetEnv func(string) string
ReadFile func(string) ([]byte, error)
Namespace string
}
// Apply runs every ConfigFn.
func (e *EnvOptionsReader) Apply(opts ...ConfigFn) {
for _, o := range opts {
o(e)
}
}
// GetEnvValue gets an OTLP environment variable value of the specified key
// using the GetEnv function.
// This function prepends the OTLP specified namespace to all key lookups.
func (e *EnvOptionsReader) GetEnvValue(key string) (string, bool) {
v := strings.TrimSpace(e.GetEnv(keyWithNamespace(e.Namespace, key)))
return v, v != ""
}
// WithString retrieves the specified config and passes it to ConfigFn as a string.
func WithString(n string, fn func(string)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
fn(v)
}
}
}
// WithBool returns a ConfigFn that reads the environment variable n and if it exists passes its parsed bool value to fn.
func WithBool(n string, fn func(bool)) ConfigFn {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
b := strings.ToLower(v) == "true"
fn(b)
}
}
}
// WithDuration retrieves the specified config and passes it to ConfigFn as a duration.
func WithDuration(n string, fn func(time.Duration)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
d, err := strconv.Atoi(v)
if err != nil {
global.Error(err, "parse duration", "input", v)
return
}
fn(time.Duration(d) * time.Millisecond)
}
}
}
// WithHeaders retrieves the specified config and passes it to ConfigFn as a map of HTTP headers.
func WithHeaders(n string, fn func(map[string]string)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
fn(stringToHeader(v))
}
}
}
// WithURL retrieves the specified config and passes it to ConfigFn as a net/url.URL.
func WithURL(n string, fn func(*url.URL)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
u, err := url.Parse(v)
if err != nil {
global.Error(err, "parse url", "input", v)
return
}
fn(u)
}
}
}
// WithCertPool returns a ConfigFn that reads the environment variable n as a filepath to a TLS certificate pool. If it exists, it is parsed as a crypto/x509.CertPool and it is passed to fn.
func WithCertPool(n string, fn func(*x509.CertPool)) ConfigFn {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
b, err := e.ReadFile(v)
if err != nil {
global.Error(err, "read tls ca cert file", "file", v)
return
}
c, err := createCertPool(b)
if err != nil {
global.Error(err, "create tls cert pool")
return
}
fn(c)
}
}
}
// WithClientCert returns a ConfigFn that reads the environment variables nc and nk as filepaths to a client certificate and key pair. If they exist, they are parsed as a crypto/tls.Certificate, which is passed to fn.
func WithClientCert(nc, nk string, fn func(tls.Certificate)) ConfigFn {
return func(e *EnvOptionsReader) {
vc, okc := e.GetEnvValue(nc)
vk, okk := e.GetEnvValue(nk)
if !okc || !okk {
return
}
cert, err := e.ReadFile(vc)
if err != nil {
global.Error(err, "read tls client cert", "file", vc)
return
}
key, err := e.ReadFile(vk)
if err != nil {
global.Error(err, "read tls client key", "file", vk)
return
}
crt, err := tls.X509KeyPair(cert, key)
if err != nil {
global.Error(err, "create tls client key pair")
return
}
fn(crt)
}
}
func keyWithNamespace(ns, key string) string {
if ns == "" {
return key
}
return fmt.Sprintf("%s_%s", ns, key)
}
func stringToHeader(value string) map[string]string {
headersPairs := strings.Split(value, ",")
headers := make(map[string]string)
for _, header := range headersPairs {
n, v, found := strings.Cut(header, "=")
if !found {
global.Error(errors.New("missing '='"), "parse headers", "input", header)
continue
}
name, err := url.PathUnescape(n)
if err != nil {
global.Error(err, "escape header key", "key", n)
continue
}
trimmedName := strings.TrimSpace(name)
value, err := url.PathUnescape(v)
if err != nil {
global.Error(err, "escape header value", "value", v)
continue
}
trimmedValue := strings.TrimSpace(value)
headers[trimmedName] = trimmedValue
}
return headers
}
func createCertPool(certBytes []byte) (*x509.CertPool, error) {
cp := x509.NewCertPool()
if ok := cp.AppendCertsFromPEM(certBytes); !ok {
return nil, errors.New("failed to append certificate to the cert pool")
}
return cp, nil
}
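A small test-style sketch of how this reader namespaces keys and parses headers; the package is internal, so this is an in-package illustration only, and the environment value is hypothetical:

r := EnvOptionsReader{
	GetEnv: func(key string) string {
		if key == "OTEL_EXPORTER_OTLP_HEADERS" {
			return "key1=value1,key2=value2"
		}
		return ""
	},
	Namespace: "OTEL_EXPORTER_OTLP",
}
var hdrs map[string]string
r.Apply(WithHeaders("HEADERS", func(h map[string]string) { hdrs = h }))
// hdrs now holds {"key1": "value1", "key2": "value2"}.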

View File

@ -0,0 +1,31 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package internal // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal"
//go:generate gotmpl --body=../../../../../internal/shared/otlp/partialsuccess.go.tmpl "--data={}" --out=partialsuccess.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/partialsuccess_test.go.tmpl "--data={}" --out=partialsuccess_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/retry/retry.go.tmpl "--data={}" --out=retry/retry.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/retry/retry_test.go.tmpl "--data={}" --out=retry/retry_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/envconfig/envconfig.go.tmpl "--data={}" --out=envconfig/envconfig.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/envconfig/envconfig_test.go.tmpl "--data={}" --out=envconfig/envconfig_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/oconf/envconfig.go.tmpl "--data={\"envconfigImportPath\": \"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/envconfig\"}" --out=oconf/envconfig.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/oconf/envconfig_test.go.tmpl "--data={}" --out=oconf/envconfig_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/oconf/options.go.tmpl "--data={\"retryImportPath\": \"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/retry\"}" --out=oconf/options.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/oconf/options_test.go.tmpl "--data={\"envconfigImportPath\": \"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/envconfig\"}" --out=oconf/options_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/oconf/optiontypes.go.tmpl "--data={}" --out=oconf/optiontypes.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/oconf/tls.go.tmpl "--data={}" --out=oconf/tls.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/otest/client.go.tmpl "--data={}" --out=otest/client.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/otest/client_test.go.tmpl "--data={\"internalImportPath\": \"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal\"}" --out=otest/client_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/otest/collector.go.tmpl "--data={\"oconfImportPath\": \"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/oconf\"}" --out=otest/collector.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/transform/attribute.go.tmpl "--data={}" --out=transform/attribute.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/transform/attribute_test.go.tmpl "--data={}" --out=transform/attribute_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/transform/error.go.tmpl "--data={}" --out=transform/error.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/transform/error_test.go.tmpl "--data={}" --out=transform/error_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/transform/metricdata.go.tmpl "--data={}" --out=transform/metricdata.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlpmetric/transform/metricdata_test.go.tmpl "--data={}" --out=transform/metricdata_test.go

View File

@ -0,0 +1,210 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlpmetric/oconf/envconfig.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package oconf // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/oconf"
import (
"crypto/tls"
"crypto/x509"
"net/url"
"os"
"path"
"strings"
"time"
"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/envconfig"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/sdk/metric"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
)
// DefaultEnvOptionsReader is the default environment variable reader.
var DefaultEnvOptionsReader = envconfig.EnvOptionsReader{
GetEnv: os.Getenv,
ReadFile: os.ReadFile,
Namespace: "OTEL_EXPORTER_OTLP",
}
// ApplyGRPCEnvConfigs applies the env configurations for gRPC.
func ApplyGRPCEnvConfigs(cfg Config) Config {
opts := getOptionsFromEnv()
for _, opt := range opts {
cfg = opt.ApplyGRPCOption(cfg)
}
return cfg
}
// ApplyHTTPEnvConfigs applies the env configurations for HTTP.
func ApplyHTTPEnvConfigs(cfg Config) Config {
opts := getOptionsFromEnv()
for _, opt := range opts {
cfg = opt.ApplyHTTPOption(cfg)
}
return cfg
}
func getOptionsFromEnv() []GenericOption {
opts := []GenericOption{}
tlsConf := &tls.Config{}
DefaultEnvOptionsReader.Apply(
envconfig.WithURL("ENDPOINT", func(u *url.URL) {
opts = append(opts, withEndpointScheme(u))
opts = append(opts, newSplitOption(func(cfg Config) Config {
cfg.Metrics.Endpoint = u.Host
// For OTLP/HTTP endpoint URLs without a per-signal
// configuration, the passed endpoint is used as a base URL
// and the signals are sent to these paths relative to that.
cfg.Metrics.URLPath = path.Join(u.Path, DefaultMetricsPath)
return cfg
}, withEndpointForGRPC(u)))
}),
envconfig.WithURL("METRICS_ENDPOINT", func(u *url.URL) {
opts = append(opts, withEndpointScheme(u))
opts = append(opts, newSplitOption(func(cfg Config) Config {
cfg.Metrics.Endpoint = u.Host
// For endpoint URLs for OTLP/HTTP per-signal variables, the
// URL MUST be used as-is without any modification. The only
// exception is that if a URL contains no path part, the root
// path / MUST be used.
path := u.Path
if path == "" {
path = "/"
}
cfg.Metrics.URLPath = path
return cfg
}, withEndpointForGRPC(u)))
}),
envconfig.WithCertPool("CERTIFICATE", func(p *x509.CertPool) { tlsConf.RootCAs = p }),
envconfig.WithCertPool("METRICS_CERTIFICATE", func(p *x509.CertPool) { tlsConf.RootCAs = p }),
envconfig.WithClientCert("CLIENT_CERTIFICATE", "CLIENT_KEY", func(c tls.Certificate) { tlsConf.Certificates = []tls.Certificate{c} }),
envconfig.WithClientCert("METRICS_CLIENT_CERTIFICATE", "METRICS_CLIENT_KEY", func(c tls.Certificate) { tlsConf.Certificates = []tls.Certificate{c} }),
envconfig.WithBool("INSECURE", func(b bool) { opts = append(opts, withInsecure(b)) }),
envconfig.WithBool("METRICS_INSECURE", func(b bool) { opts = append(opts, withInsecure(b)) }),
withTLSConfig(tlsConf, func(c *tls.Config) { opts = append(opts, WithTLSClientConfig(c)) }),
envconfig.WithHeaders("HEADERS", func(h map[string]string) { opts = append(opts, WithHeaders(h)) }),
envconfig.WithHeaders("METRICS_HEADERS", func(h map[string]string) { opts = append(opts, WithHeaders(h)) }),
WithEnvCompression("COMPRESSION", func(c Compression) { opts = append(opts, WithCompression(c)) }),
WithEnvCompression("METRICS_COMPRESSION", func(c Compression) { opts = append(opts, WithCompression(c)) }),
envconfig.WithDuration("TIMEOUT", func(d time.Duration) { opts = append(opts, WithTimeout(d)) }),
envconfig.WithDuration("METRICS_TIMEOUT", func(d time.Duration) { opts = append(opts, WithTimeout(d)) }),
withEnvTemporalityPreference("METRICS_TEMPORALITY_PREFERENCE", func(t metric.TemporalitySelector) { opts = append(opts, WithTemporalitySelector(t)) }),
withEnvAggPreference("METRICS_DEFAULT_HISTOGRAM_AGGREGATION", func(a metric.AggregationSelector) { opts = append(opts, WithAggregationSelector(a)) }),
)
return opts
}
func withEndpointForGRPC(u *url.URL) func(cfg Config) Config {
return func(cfg Config) Config {
// For OTLP/gRPC endpoints, this is the target to which the
// exporter is going to send telemetry.
cfg.Metrics.Endpoint = path.Join(u.Host, u.Path)
return cfg
}
}
// WithEnvCompression retrieves the specified config and passes it to ConfigFn as a Compression.
func WithEnvCompression(n string, fn func(Compression)) func(e *envconfig.EnvOptionsReader) {
return func(e *envconfig.EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
cp := NoCompression
if v == "gzip" {
cp = GzipCompression
}
fn(cp)
}
}
}
func withEndpointScheme(u *url.URL) GenericOption {
switch strings.ToLower(u.Scheme) {
case "http", "unix":
return WithInsecure()
default:
return WithSecure()
}
}
// revive:disable-next-line:flag-parameter
func withInsecure(b bool) GenericOption {
if b {
return WithInsecure()
}
return WithSecure()
}
func withTLSConfig(c *tls.Config, fn func(*tls.Config)) func(e *envconfig.EnvOptionsReader) {
return func(e *envconfig.EnvOptionsReader) {
if c.RootCAs != nil || len(c.Certificates) > 0 {
fn(c)
}
}
}
func withEnvTemporalityPreference(n string, fn func(metric.TemporalitySelector)) func(e *envconfig.EnvOptionsReader) {
return func(e *envconfig.EnvOptionsReader) {
if s, ok := e.GetEnvValue(n); ok {
switch strings.ToLower(s) {
case "cumulative":
fn(cumulativeTemporality)
case "delta":
fn(deltaTemporality)
case "lowmemory":
fn(lowMemory)
default:
global.Warn("OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE is set to an invalid value, ignoring.", "value", s)
}
}
}
}
func cumulativeTemporality(metric.InstrumentKind) metricdata.Temporality {
return metricdata.CumulativeTemporality
}
func deltaTemporality(ik metric.InstrumentKind) metricdata.Temporality {
switch ik {
case metric.InstrumentKindCounter, metric.InstrumentKindHistogram, metric.InstrumentKindObservableCounter:
return metricdata.DeltaTemporality
default:
return metricdata.CumulativeTemporality
}
}
func lowMemory(ik metric.InstrumentKind) metricdata.Temporality {
switch ik {
case metric.InstrumentKindCounter, metric.InstrumentKindHistogram:
return metricdata.DeltaTemporality
default:
return metricdata.CumulativeTemporality
}
}
func withEnvAggPreference(n string, fn func(metric.AggregationSelector)) func(e *envconfig.EnvOptionsReader) {
return func(e *envconfig.EnvOptionsReader) {
if s, ok := e.GetEnvValue(n); ok {
switch strings.ToLower(s) {
case "explicit_bucket_histogram":
fn(metric.DefaultAggregationSelector)
case "base2_exponential_bucket_histogram":
fn(func(kind metric.InstrumentKind) metric.Aggregation {
if kind == metric.InstrumentKindHistogram {
return metric.AggregationBase2ExponentialHistogram{
MaxSize: 160,
MaxScale: 20,
NoMinMax: false,
}
}
return metric.DefaultAggregationSelector(kind)
})
default:
global.Warn("OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION is set to an invalid value, ignoring.", "value", s)
}
}
}
}
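The same mapping can be requested programmatically. A sketch of a selector equivalent to the "delta" preference, passed through the public otlpmetricgrpc.WithTemporalitySelector option (sdkmetric and metricdata are assumed aliases for go.opentelemetry.io/otel/sdk/metric and go.opentelemetry.io/otel/sdk/metric/metricdata, and ctx is assumed in scope):

deltaSelector := func(ik sdkmetric.InstrumentKind) metricdata.Temporality {
	switch ik {
	case sdkmetric.InstrumentKindCounter,
		sdkmetric.InstrumentKindHistogram,
		sdkmetric.InstrumentKindObservableCounter:
		return metricdata.DeltaTemporality
	default:
		return metricdata.CumulativeTemporality
	}
}
exp, err := otlpmetricgrpc.New(ctx, otlpmetricgrpc.WithTemporalitySelector(deltaSelector))
if err != nil {
	// handle error
}
_ = exp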

View File

@ -0,0 +1,376 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlpmetric/oconf/options.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package oconf // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/oconf"
import (
"crypto/tls"
"fmt"
"net/http"
"net/url"
"path"
"strings"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/backoff"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"google.golang.org/grpc/encoding/gzip"
"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/retry"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/sdk/metric"
)
const (
// DefaultMaxAttempts describes how many times the driver
// should retry the sending of the payload in case of a
// retryable error.
DefaultMaxAttempts int = 5
// DefaultMetricsPath is a default URL path for endpoint that
// receives metrics.
DefaultMetricsPath string = "/v1/metrics"
// DefaultBackoff is a default base backoff time used in the
// exponential backoff strategy.
DefaultBackoff time.Duration = 300 * time.Millisecond
// DefaultTimeout is a default max waiting time for the backend to process
// each span or metrics batch.
DefaultTimeout time.Duration = 10 * time.Second
)
type (
// HTTPTransportProxyFunc is a function that resolves which URL to use as proxy for a given request.
// This type is compatible with `http.Transport.Proxy` and can be used to set a custom proxy function to the OTLP HTTP client.
HTTPTransportProxyFunc func(*http.Request) (*url.URL, error)
SignalConfig struct {
Endpoint string
Insecure bool
TLSCfg *tls.Config
Headers map[string]string
Compression Compression
Timeout time.Duration
URLPath string
// gRPC configurations
GRPCCredentials credentials.TransportCredentials
TemporalitySelector metric.TemporalitySelector
AggregationSelector metric.AggregationSelector
Proxy HTTPTransportProxyFunc
}
Config struct {
// Signal specific configurations
Metrics SignalConfig
RetryConfig retry.Config
// gRPC configurations
ReconnectionPeriod time.Duration
ServiceConfig string
DialOptions []grpc.DialOption
GRPCConn *grpc.ClientConn
}
)
// NewHTTPConfig returns a new Config with all settings applied from opts and
// any unset setting using the default HTTP config values.
func NewHTTPConfig(opts ...HTTPOption) Config {
cfg := Config{
Metrics: SignalConfig{
Endpoint: fmt.Sprintf("%s:%d", DefaultCollectorHost, DefaultCollectorHTTPPort),
URLPath: DefaultMetricsPath,
Compression: NoCompression,
Timeout: DefaultTimeout,
TemporalitySelector: metric.DefaultTemporalitySelector,
AggregationSelector: metric.DefaultAggregationSelector,
},
RetryConfig: retry.DefaultConfig,
}
cfg = ApplyHTTPEnvConfigs(cfg)
for _, opt := range opts {
cfg = opt.ApplyHTTPOption(cfg)
}
cfg.Metrics.URLPath = cleanPath(cfg.Metrics.URLPath, DefaultMetricsPath)
return cfg
}
// cleanPath returns a path with all spaces trimmed and all redundancies
// removed. If urlPath is empty or cleaning it results in an empty string,
// defaultPath is returned instead.
func cleanPath(urlPath string, defaultPath string) string {
tmp := path.Clean(strings.TrimSpace(urlPath))
if tmp == "." {
return defaultPath
}
if !path.IsAbs(tmp) {
tmp = fmt.Sprintf("/%s", tmp)
}
return tmp
}
// NewGRPCConfig returns a new Config with all settings applied from opts and
// any unset setting using the default gRPC config values.
func NewGRPCConfig(opts ...GRPCOption) Config {
cfg := Config{
Metrics: SignalConfig{
Endpoint: fmt.Sprintf("%s:%d", DefaultCollectorHost, DefaultCollectorGRPCPort),
URLPath: DefaultMetricsPath,
Compression: NoCompression,
Timeout: DefaultTimeout,
TemporalitySelector: metric.DefaultTemporalitySelector,
AggregationSelector: metric.DefaultAggregationSelector,
},
RetryConfig: retry.DefaultConfig,
}
cfg = ApplyGRPCEnvConfigs(cfg)
for _, opt := range opts {
cfg = opt.ApplyGRPCOption(cfg)
}
if cfg.ServiceConfig != "" {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithDefaultServiceConfig(cfg.ServiceConfig))
}
// Prioritize GRPCCredentials over Insecure (passing both is an error).
if cfg.Metrics.GRPCCredentials != nil {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(cfg.Metrics.GRPCCredentials))
} else if cfg.Metrics.Insecure {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(insecure.NewCredentials()))
} else {
// Default to using the host's root CA.
creds := credentials.NewTLS(nil)
cfg.Metrics.GRPCCredentials = creds
cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(creds))
}
if cfg.Metrics.Compression == GzipCompression {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name)))
}
if cfg.ReconnectionPeriod != 0 {
p := grpc.ConnectParams{
Backoff: backoff.DefaultConfig,
MinConnectTimeout: cfg.ReconnectionPeriod,
}
cfg.DialOptions = append(cfg.DialOptions, grpc.WithConnectParams(p))
}
return cfg
}
type (
// GenericOption applies an option to the HTTP or gRPC driver.
GenericOption interface {
ApplyHTTPOption(Config) Config
ApplyGRPCOption(Config) Config
// A private method to prevent users implementing the
// interface and so future additions to it will not
// violate compatibility.
private()
}
// HTTPOption applies an option to the HTTP driver.
HTTPOption interface {
ApplyHTTPOption(Config) Config
// A private method to prevent users implementing the
// interface and so future additions to it will not
// violate compatibility.
private()
}
// GRPCOption applies an option to the gRPC driver.
GRPCOption interface {
ApplyGRPCOption(Config) Config
// A private method to prevent users implementing the
// interface and so future additions to it will not
// violate compatibility.
private()
}
)
// genericOption is an option that applies the same logic
// for both gRPC and HTTP.
type genericOption struct {
fn func(Config) Config
}
func (g *genericOption) ApplyGRPCOption(cfg Config) Config {
return g.fn(cfg)
}
func (g *genericOption) ApplyHTTPOption(cfg Config) Config {
return g.fn(cfg)
}
func (genericOption) private() {}
func newGenericOption(fn func(cfg Config) Config) GenericOption {
return &genericOption{fn: fn}
}
// splitOption is an option that applies different logic
// for gRPC and HTTP.
type splitOption struct {
httpFn func(Config) Config
grpcFn func(Config) Config
}
func (g *splitOption) ApplyGRPCOption(cfg Config) Config {
return g.grpcFn(cfg)
}
func (g *splitOption) ApplyHTTPOption(cfg Config) Config {
return g.httpFn(cfg)
}
func (splitOption) private() {}
func newSplitOption(httpFn func(cfg Config) Config, grpcFn func(cfg Config) Config) GenericOption {
return &splitOption{httpFn: httpFn, grpcFn: grpcFn}
}
// httpOption is an option that is only applied to the HTTP driver.
type httpOption struct {
fn func(Config) Config
}
func (h *httpOption) ApplyHTTPOption(cfg Config) Config {
return h.fn(cfg)
}
func (httpOption) private() {}
func NewHTTPOption(fn func(cfg Config) Config) HTTPOption {
return &httpOption{fn: fn}
}
// grpcOption is an option that is only applied to the gRPC driver.
type grpcOption struct {
fn func(Config) Config
}
func (h *grpcOption) ApplyGRPCOption(cfg Config) Config {
return h.fn(cfg)
}
func (grpcOption) private() {}
func NewGRPCOption(fn func(cfg Config) Config) GRPCOption {
return &grpcOption{fn: fn}
}
// Generic Options
func WithEndpoint(endpoint string) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Metrics.Endpoint = endpoint
return cfg
})
}
func WithEndpointURL(v string) GenericOption {
return newGenericOption(func(cfg Config) Config {
u, err := url.Parse(v)
if err != nil {
global.Error(err, "otlpmetric: parse endpoint url", "url", v)
return cfg
}
cfg.Metrics.Endpoint = u.Host
cfg.Metrics.URLPath = u.Path
if u.Scheme != "https" {
cfg.Metrics.Insecure = true
}
return cfg
})
}
func WithCompression(compression Compression) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Metrics.Compression = compression
return cfg
})
}
func WithURLPath(urlPath string) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Metrics.URLPath = urlPath
return cfg
})
}
func WithRetry(rc retry.Config) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.RetryConfig = rc
return cfg
})
}
func WithTLSClientConfig(tlsCfg *tls.Config) GenericOption {
return newSplitOption(func(cfg Config) Config {
cfg.Metrics.TLSCfg = tlsCfg.Clone()
return cfg
}, func(cfg Config) Config {
cfg.Metrics.GRPCCredentials = credentials.NewTLS(tlsCfg)
return cfg
})
}
func WithInsecure() GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Metrics.Insecure = true
return cfg
})
}
func WithSecure() GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Metrics.Insecure = false
return cfg
})
}
func WithHeaders(headers map[string]string) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Metrics.Headers = headers
return cfg
})
}
func WithTimeout(duration time.Duration) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Metrics.Timeout = duration
return cfg
})
}
func WithTemporalitySelector(selector metric.TemporalitySelector) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Metrics.TemporalitySelector = selector
return cfg
})
}
func WithAggregationSelector(selector metric.AggregationSelector) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Metrics.AggregationSelector = selector
return cfg
})
}
func WithProxy(pf HTTPTransportProxyFunc) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Metrics.Proxy = pf
return cfg
})
}
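To make the generic/split distinction concrete, a test-style in-package sketch applying the same option to both transports (the tls.Config value is a placeholder):

tlsCfg := &tls.Config{MinVersion: tls.VersionTLS12}
opt := WithTLSClientConfig(tlsCfg)
httpCfg := opt.ApplyHTTPOption(Config{})
// httpCfg.Metrics.TLSCfg holds a clone of tlsCfg; GRPCCredentials stays nil.
grpcCfg := opt.ApplyGRPCOption(Config{})
// grpcCfg.Metrics.GRPCCredentials wraps tlsCfg via credentials.NewTLS.
_, _ = httpCfg, grpcCfg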

View File

@ -0,0 +1,47 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlpmetric/oconf/optiontypes.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package oconf // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/oconf"
import "time"
const (
// DefaultCollectorGRPCPort is the default gRPC port of the collector.
DefaultCollectorGRPCPort uint16 = 4317
// DefaultCollectorHTTPPort is the default HTTP port of the collector.
DefaultCollectorHTTPPort uint16 = 4318
// DefaultCollectorHost is the host address the Exporter will attempt to
// connect to if no collector address is provided.
DefaultCollectorHost string = "localhost"
)
// Compression describes the compression used for payloads sent to the
// collector.
type Compression int
const (
// NoCompression tells the driver to send payloads without
// compression.
NoCompression Compression = iota
// GzipCompression tells the driver to send payloads after
// compressing them with gzip.
GzipCompression
)
// RetrySettings defines configuration for retrying batches in case of export failure
// using an exponential backoff.
type RetrySettings struct {
// Enabled indicates whether to retry sending batches in case of export failure.
Enabled bool
// InitialInterval the time to wait after the first failure before retrying.
InitialInterval time.Duration
// MaxInterval is the upper bound on backoff interval. Once this value is reached the delay between
// consecutive retries will always be `MaxInterval`.
MaxInterval time.Duration
// MaxElapsedTime is the maximum amount of time (including retries) spent trying to send a request/batch.
// Once this value is reached, the data is discarded.
MaxElapsedTime time.Duration
}

View File

@ -0,0 +1,38 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlpmetric/oconf/tls.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package oconf // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/oconf"
import (
"crypto/tls"
"crypto/x509"
"errors"
"os"
)
// ReadTLSConfigFromFile reads a PEM certificate file and creates
// a tls.Config that will use this certificate to verify a server certificate.
func ReadTLSConfigFromFile(path string) (*tls.Config, error) {
b, err := os.ReadFile(path)
if err != nil {
return nil, err
}
return CreateTLSConfig(b)
}
// CreateTLSConfig creates a tls.Config from a raw certificate bytes
// to verify a server certificate.
func CreateTLSConfig(certBytes []byte) (*tls.Config, error) {
cp := x509.NewCertPool()
if ok := cp.AppendCertsFromPEM(certBytes); !ok {
return nil, errors.New("failed to append certificate to the cert pool")
}
return &tls.Config{
RootCAs: cp,
}, nil
}
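At the exporter level, equivalent TLS configuration is usually supplied as gRPC transport credentials. A sketch using the grpc credentials helper (the CA file path is hypothetical; ctx and the log package are assumed in scope):

creds, err := credentials.NewClientTLSFromFile("/etc/otel/ca.pem", "")
if err != nil {
	log.Fatal(err)
}
exp, err := otlpmetricgrpc.New(ctx, otlpmetricgrpc.WithTLSCredentials(creds))
if err != nil {
	log.Fatal(err)
}
_ = exp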

View File

@ -0,0 +1,56 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/partialsuccess.go
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package internal // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal"
import "fmt"
// PartialSuccess represents the underlying error for all handling of
// OTLP partial success messages. Use `errors.Is(err,
// PartialSuccess{})` to test whether an error passed to the OTel
// error handler belongs to this category.
type PartialSuccess struct {
ErrorMessage string
RejectedItems int64
RejectedKind string
}
var _ error = PartialSuccess{}
// Error implements the error interface.
func (ps PartialSuccess) Error() string {
msg := ps.ErrorMessage
if msg == "" {
msg = "empty message"
}
return fmt.Sprintf("OTLP partial success: %s (%d %s rejected)", msg, ps.RejectedItems, ps.RejectedKind)
}
// Is supports the errors.Is() interface.
func (ps PartialSuccess) Is(err error) bool {
_, ok := err.(PartialSuccess)
return ok
}
// TracePartialSuccessError returns an error describing a partial success
// response for the trace signal.
func TracePartialSuccessError(itemsRejected int64, errorMessage string) error {
return PartialSuccess{
ErrorMessage: errorMessage,
RejectedItems: itemsRejected,
RejectedKind: "spans",
}
}
// MetricPartialSuccessError returns an error describing a partial success
// response for the metric signal.
func MetricPartialSuccessError(itemsRejected int64, errorMessage string) error {
return PartialSuccess{
ErrorMessage: errorMessage,
RejectedItems: itemsRejected,
RejectedKind: "metric data points",
}
}
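A minimal sketch of the errors.Is check described above, written in-package since this package is internal (the rejected count and message are made up; the errors import is assumed):

err := MetricPartialSuccessError(3, "3 data points were rejected by the collector")
if errors.Is(err, PartialSuccess{}) {
	// err carries a partial-success payload; surface it via the OTel error
	// handler or log it rather than failing the export.
	fmt.Println(err)
}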

View File

@ -0,0 +1,145 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/retry/retry.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package retry provides request retry functionality that can perform
// configurable exponential backoff for transient errors and honor any
// explicit throttle responses received.
package retry // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/retry"
import (
"context"
"fmt"
"time"
"github.com/cenkalti/backoff/v4"
)
// DefaultConfig are the recommended defaults to use.
var DefaultConfig = Config{
Enabled: true,
InitialInterval: 5 * time.Second,
MaxInterval: 30 * time.Second,
MaxElapsedTime: time.Minute,
}
// Config defines configuration for retrying batches in case of export failure
// using an exponential backoff.
type Config struct {
// Enabled indicates whether to retry sending batches in case of
// export failure.
Enabled bool
// InitialInterval the time to wait after the first failure before
// retrying.
InitialInterval time.Duration
// MaxInterval is the upper bound on backoff interval. Once this value is
// reached the delay between consecutive retries will always be
// `MaxInterval`.
MaxInterval time.Duration
// MaxElapsedTime is the maximum amount of time (including retries) spent
// trying to send a request/batch. Once this value is reached, the data
// is discarded.
MaxElapsedTime time.Duration
}
// RequestFunc wraps a request with retry logic.
type RequestFunc func(context.Context, func(context.Context) error) error
// EvaluateFunc returns whether an error is retryable and whether an explicit
// throttle duration included in the error should be honored.
//
// The function must return true if the error argument is retry-able,
// otherwise it must return false for the first return parameter.
//
// The function must return a non-zero time.Duration if the error contains
// explicit throttle duration that should be honored, otherwise it must return
// a zero valued time.Duration.
type EvaluateFunc func(error) (bool, time.Duration)
// RequestFunc returns a RequestFunc using the evaluate function to determine
// if requests can be retried and based on the exponential backoff
// configuration of c.
func (c Config) RequestFunc(evaluate EvaluateFunc) RequestFunc {
if !c.Enabled {
return func(ctx context.Context, fn func(context.Context) error) error {
return fn(ctx)
}
}
return func(ctx context.Context, fn func(context.Context) error) error {
// Do not use NewExponentialBackOff since it calls Reset and the code here
// must call Reset after changing the InitialInterval (this saves an
// unnecessary call to Now).
b := &backoff.ExponentialBackOff{
InitialInterval: c.InitialInterval,
RandomizationFactor: backoff.DefaultRandomizationFactor,
Multiplier: backoff.DefaultMultiplier,
MaxInterval: c.MaxInterval,
MaxElapsedTime: c.MaxElapsedTime,
Stop: backoff.Stop,
Clock: backoff.SystemClock,
}
b.Reset()
for {
err := fn(ctx)
if err == nil {
return nil
}
retryable, throttle := evaluate(err)
if !retryable {
return err
}
bOff := b.NextBackOff()
if bOff == backoff.Stop {
return fmt.Errorf("max retry time elapsed: %w", err)
}
// Wait for the greater of the backoff or throttle delay.
var delay time.Duration
if bOff > throttle {
delay = bOff
} else {
elapsed := b.GetElapsedTime()
if b.MaxElapsedTime != 0 && elapsed+throttle > b.MaxElapsedTime {
return fmt.Errorf("max retry time would elapse: %w", err)
}
delay = throttle
}
if ctxErr := waitFunc(ctx, delay); ctxErr != nil {
return fmt.Errorf("%w: %w", ctxErr, err)
}
}
}
}
// Allow override for testing.
var waitFunc = wait
// wait takes the caller's context, and the amount of time to wait. It will
// return nil if the timer fires before or at the same time as the context's
// deadline. This indicates that the call can be retried.
func wait(ctx context.Context, delay time.Duration) error {
timer := time.NewTimer(delay)
defer timer.Stop()
select {
case <-ctx.Done():
// Handle the case where the timer and context deadline end
// simultaneously by prioritizing the timer expiration nil value
// response.
select {
case <-timer.C:
default:
return ctx.Err()
}
case <-timer.C:
}
return nil
}
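// Example (illustrative sketch, not part of the vendored file): wiring a
// RequestFunc from a Config. The evaluate function and doExport below are
// hypothetical; real exporters map transport-specific errors (for example
// gRPC status codes) to a retryable decision and an optional throttle.
//
//	evaluate := func(err error) (bool, time.Duration) {
//		// Treat every error as retryable with no server-imposed throttle.
//		return true, 0
//	}
//	send := DefaultConfig.RequestFunc(evaluate)
//	err := send(context.Background(), func(ctx context.Context) error {
//		return doExport(ctx) // placeholder for the real export call
//	})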

View File

@ -0,0 +1,144 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlpmetric/transform/attribute.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package transform // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/transform"
import (
"go.opentelemetry.io/otel/attribute"
cpb "go.opentelemetry.io/proto/otlp/common/v1"
)
// AttrIter transforms an attribute iterator into OTLP key-values.
func AttrIter(iter attribute.Iterator) []*cpb.KeyValue {
l := iter.Len()
if l == 0 {
return nil
}
out := make([]*cpb.KeyValue, 0, l)
for iter.Next() {
out = append(out, KeyValue(iter.Attribute()))
}
return out
}
// KeyValues transforms a slice of attribute KeyValues into OTLP key-values.
func KeyValues(attrs []attribute.KeyValue) []*cpb.KeyValue {
if len(attrs) == 0 {
return nil
}
out := make([]*cpb.KeyValue, 0, len(attrs))
for _, kv := range attrs {
out = append(out, KeyValue(kv))
}
return out
}
// KeyValue transforms an attribute KeyValue into an OTLP key-value.
func KeyValue(kv attribute.KeyValue) *cpb.KeyValue {
return &cpb.KeyValue{Key: string(kv.Key), Value: Value(kv.Value)}
}
// Value transforms an attribute Value into an OTLP AnyValue.
func Value(v attribute.Value) *cpb.AnyValue {
av := new(cpb.AnyValue)
switch v.Type() {
case attribute.BOOL:
av.Value = &cpb.AnyValue_BoolValue{
BoolValue: v.AsBool(),
}
case attribute.BOOLSLICE:
av.Value = &cpb.AnyValue_ArrayValue{
ArrayValue: &cpb.ArrayValue{
Values: boolSliceValues(v.AsBoolSlice()),
},
}
case attribute.INT64:
av.Value = &cpb.AnyValue_IntValue{
IntValue: v.AsInt64(),
}
case attribute.INT64SLICE:
av.Value = &cpb.AnyValue_ArrayValue{
ArrayValue: &cpb.ArrayValue{
Values: int64SliceValues(v.AsInt64Slice()),
},
}
case attribute.FLOAT64:
av.Value = &cpb.AnyValue_DoubleValue{
DoubleValue: v.AsFloat64(),
}
case attribute.FLOAT64SLICE:
av.Value = &cpb.AnyValue_ArrayValue{
ArrayValue: &cpb.ArrayValue{
Values: float64SliceValues(v.AsFloat64Slice()),
},
}
case attribute.STRING:
av.Value = &cpb.AnyValue_StringValue{
StringValue: v.AsString(),
}
case attribute.STRINGSLICE:
av.Value = &cpb.AnyValue_ArrayValue{
ArrayValue: &cpb.ArrayValue{
Values: stringSliceValues(v.AsStringSlice()),
},
}
default:
av.Value = &cpb.AnyValue_StringValue{
StringValue: "INVALID",
}
}
return av
}
func boolSliceValues(vals []bool) []*cpb.AnyValue {
converted := make([]*cpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &cpb.AnyValue{
Value: &cpb.AnyValue_BoolValue{
BoolValue: v,
},
}
}
return converted
}
func int64SliceValues(vals []int64) []*cpb.AnyValue {
converted := make([]*cpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &cpb.AnyValue{
Value: &cpb.AnyValue_IntValue{
IntValue: v,
},
}
}
return converted
}
func float64SliceValues(vals []float64) []*cpb.AnyValue {
converted := make([]*cpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &cpb.AnyValue{
Value: &cpb.AnyValue_DoubleValue{
DoubleValue: v,
},
}
}
return converted
}
func stringSliceValues(vals []string) []*cpb.AnyValue {
converted := make([]*cpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &cpb.AnyValue{
Value: &cpb.AnyValue_StringValue{
StringValue: v,
},
}
}
return converted
}
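// Example (illustrative sketch, not part of the vendored file): KeyValues
// and Value convert SDK attributes into their OTLP protobuf counterparts.
//
//	attrs := []attribute.KeyValue{
//		attribute.String("service.name", "demo"),
//		attribute.Int64("retry.count", 2),
//	}
//	kvs := KeyValues(attrs)
//	// kvs[0].Key == "service.name" and kvs[0].Value wraps an
//	// AnyValue_StringValue; kvs[1].Value wraps an AnyValue_IntValue.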

View File

@ -0,0 +1,103 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlpmetric/transform/error.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package transform // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/transform"
import (
"errors"
"fmt"
"strings"
mpb "go.opentelemetry.io/proto/otlp/metrics/v1"
)
var (
errUnknownAggregation = errors.New("unknown aggregation")
errUnknownTemporality = errors.New("unknown temporality")
)
type errMetric struct {
m *mpb.Metric
err error
}
func (e errMetric) Unwrap() error {
return e.err
}
func (e errMetric) Error() string {
format := "invalid metric (name: %q, description: %q, unit: %q): %s"
return fmt.Sprintf(format, e.m.Name, e.m.Description, e.m.Unit, e.err)
}
func (e errMetric) Is(target error) bool {
return errors.Is(e.err, target)
}
// multiErr is used by the data-type transform functions to wrap multiple
// errors into a single return value. The error message will show all errors
// as a list and scope them by the datatype name that is returning them.
type multiErr struct {
datatype string
errs []error
}
// errOrNil returns nil if e contains no errors, otherwise it returns e.
func (e *multiErr) errOrNil() error {
if len(e.errs) == 0 {
return nil
}
return e
}
// append adds err to e. If err is a multiErr, its errs are flattened into e.
func (e *multiErr) append(err error) {
// Do not use errors.As here, this should only be flattened one layer. If
// there is a *multiErr several steps down the chain, all the errors above
// it will be discarded if errors.As is used instead.
switch other := err.(type) { //nolint:errorlint
case *multiErr:
// Flatten err errors into e.
e.errs = append(e.errs, other.errs...)
default:
e.errs = append(e.errs, err)
}
}
func (e *multiErr) Error() string {
es := make([]string, len(e.errs))
for i, err := range e.errs {
es[i] = fmt.Sprintf("* %s", err)
}
format := "%d errors occurred transforming %s:\n\t%s"
return fmt.Sprintf(format, len(es), e.datatype, strings.Join(es, "\n\t"))
}
func (e *multiErr) Unwrap() error {
switch len(e.errs) {
case 0:
return nil
case 1:
return e.errs[0]
}
// Return a multiErr without the leading error.
cp := &multiErr{
datatype: e.datatype,
errs: make([]error, len(e.errs)-1),
}
copy(cp.errs, e.errs[1:])
return cp
}
func (e *multiErr) Is(target error) bool {
if len(e.errs) == 0 {
return false
}
// Check if the first error is target.
return errors.Is(e.errs[0], target)
}
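// Example (illustrative sketch, not part of the vendored file): multiErr
// accumulates transform errors into one value while still supporting
// errors.Is for the wrapped sentinel errors.
//
//	errs := &multiErr{datatype: "Metrics"}
//	errs.append(errUnknownTemporality)
//	errs.append(errUnknownAggregation)
//	err := errs.errOrNil()                    // non-nil, lists both errors
//	_ = errors.Is(err, errUnknownTemporality) // true via Is/Unwrap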

View File

@ -0,0 +1,352 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlpmetric/transform/metricdata.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package transform provides transformation functionality from the
// sdk/metric/metricdata data-types into OTLP data-types.
package transform // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal/transform"
import (
"fmt"
"time"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
cpb "go.opentelemetry.io/proto/otlp/common/v1"
mpb "go.opentelemetry.io/proto/otlp/metrics/v1"
rpb "go.opentelemetry.io/proto/otlp/resource/v1"
)
// ResourceMetrics returns an OTLP ResourceMetrics generated from rm. If rm
// contains invalid ScopeMetrics, an error will be returned along with an OTLP
// ResourceMetrics that contains partial OTLP ScopeMetrics.
func ResourceMetrics(rm *metricdata.ResourceMetrics) (*mpb.ResourceMetrics, error) {
sms, err := ScopeMetrics(rm.ScopeMetrics)
return &mpb.ResourceMetrics{
Resource: &rpb.Resource{
Attributes: AttrIter(rm.Resource.Iter()),
},
ScopeMetrics: sms,
SchemaUrl: rm.Resource.SchemaURL(),
}, err
}
// ScopeMetrics returns a slice of OTLP ScopeMetrics generated from sms. If
// sms contains invalid metric values, an error will be returned along with a
// slice that contains partial OTLP ScopeMetrics.
func ScopeMetrics(sms []metricdata.ScopeMetrics) ([]*mpb.ScopeMetrics, error) {
errs := &multiErr{datatype: "ScopeMetrics"}
out := make([]*mpb.ScopeMetrics, 0, len(sms))
for _, sm := range sms {
ms, err := Metrics(sm.Metrics)
if err != nil {
errs.append(err)
}
out = append(out, &mpb.ScopeMetrics{
Scope: &cpb.InstrumentationScope{
Name: sm.Scope.Name,
Version: sm.Scope.Version,
},
Metrics: ms,
SchemaUrl: sm.Scope.SchemaURL,
})
}
return out, errs.errOrNil()
}
// Metrics returns a slice of OTLP Metric generated from ms. If ms contains
// invalid metric values, an error will be returned along with a slice that
// contains partial OTLP Metrics.
func Metrics(ms []metricdata.Metrics) ([]*mpb.Metric, error) {
errs := &multiErr{datatype: "Metrics"}
out := make([]*mpb.Metric, 0, len(ms))
for _, m := range ms {
o, err := metric(m)
if err != nil {
// Do not include invalid data. Drop the metric, report the error.
errs.append(errMetric{m: o, err: err})
continue
}
out = append(out, o)
}
return out, errs.errOrNil()
}
func metric(m metricdata.Metrics) (*mpb.Metric, error) {
var err error
out := &mpb.Metric{
Name: m.Name,
Description: m.Description,
Unit: m.Unit,
}
switch a := m.Data.(type) {
case metricdata.Gauge[int64]:
out.Data = Gauge[int64](a)
case metricdata.Gauge[float64]:
out.Data = Gauge[float64](a)
case metricdata.Sum[int64]:
out.Data, err = Sum[int64](a)
case metricdata.Sum[float64]:
out.Data, err = Sum[float64](a)
case metricdata.Histogram[int64]:
out.Data, err = Histogram(a)
case metricdata.Histogram[float64]:
out.Data, err = Histogram(a)
case metricdata.ExponentialHistogram[int64]:
out.Data, err = ExponentialHistogram(a)
case metricdata.ExponentialHistogram[float64]:
out.Data, err = ExponentialHistogram(a)
case metricdata.Summary:
out.Data = Summary(a)
default:
return out, fmt.Errorf("%w: %T", errUnknownAggregation, a)
}
return out, err
}
// Gauge returns an OTLP Metric_Gauge generated from g.
func Gauge[N int64 | float64](g metricdata.Gauge[N]) *mpb.Metric_Gauge {
return &mpb.Metric_Gauge{
Gauge: &mpb.Gauge{
DataPoints: DataPoints(g.DataPoints),
},
}
}
// Sum returns an OTLP Metric_Sum generated from s. An error is returned
// if the temporality of s is unknown.
func Sum[N int64 | float64](s metricdata.Sum[N]) (*mpb.Metric_Sum, error) {
t, err := Temporality(s.Temporality)
if err != nil {
return nil, err
}
return &mpb.Metric_Sum{
Sum: &mpb.Sum{
AggregationTemporality: t,
IsMonotonic: s.IsMonotonic,
DataPoints: DataPoints(s.DataPoints),
},
}, nil
}
// DataPoints returns a slice of OTLP NumberDataPoint generated from dPts.
func DataPoints[N int64 | float64](dPts []metricdata.DataPoint[N]) []*mpb.NumberDataPoint {
out := make([]*mpb.NumberDataPoint, 0, len(dPts))
for _, dPt := range dPts {
ndp := &mpb.NumberDataPoint{
Attributes: AttrIter(dPt.Attributes.Iter()),
StartTimeUnixNano: timeUnixNano(dPt.StartTime),
TimeUnixNano: timeUnixNano(dPt.Time),
Exemplars: Exemplars(dPt.Exemplars),
}
switch v := any(dPt.Value).(type) {
case int64:
ndp.Value = &mpb.NumberDataPoint_AsInt{
AsInt: v,
}
case float64:
ndp.Value = &mpb.NumberDataPoint_AsDouble{
AsDouble: v,
}
}
out = append(out, ndp)
}
return out
}
// Histogram returns an OTLP Metric_Histogram generated from h. An error is
// returned if the temporality of h is unknown.
func Histogram[N int64 | float64](h metricdata.Histogram[N]) (*mpb.Metric_Histogram, error) {
t, err := Temporality(h.Temporality)
if err != nil {
return nil, err
}
return &mpb.Metric_Histogram{
Histogram: &mpb.Histogram{
AggregationTemporality: t,
DataPoints: HistogramDataPoints(h.DataPoints),
},
}, nil
}
// HistogramDataPoints returns a slice of OTLP HistogramDataPoint generated
// from dPts.
func HistogramDataPoints[N int64 | float64](dPts []metricdata.HistogramDataPoint[N]) []*mpb.HistogramDataPoint {
out := make([]*mpb.HistogramDataPoint, 0, len(dPts))
for _, dPt := range dPts {
sum := float64(dPt.Sum)
hdp := &mpb.HistogramDataPoint{
Attributes: AttrIter(dPt.Attributes.Iter()),
StartTimeUnixNano: timeUnixNano(dPt.StartTime),
TimeUnixNano: timeUnixNano(dPt.Time),
Count: dPt.Count,
Sum: &sum,
BucketCounts: dPt.BucketCounts,
ExplicitBounds: dPt.Bounds,
Exemplars: Exemplars(dPt.Exemplars),
}
if v, ok := dPt.Min.Value(); ok {
vF64 := float64(v)
hdp.Min = &vF64
}
if v, ok := dPt.Max.Value(); ok {
vF64 := float64(v)
hdp.Max = &vF64
}
out = append(out, hdp)
}
return out
}
// ExponentialHistogram returns an OTLP Metric_ExponentialHistogram generated from h. An error is
// returned if the temporality of h is unknown.
func ExponentialHistogram[N int64 | float64](h metricdata.ExponentialHistogram[N]) (*mpb.Metric_ExponentialHistogram, error) {
t, err := Temporality(h.Temporality)
if err != nil {
return nil, err
}
return &mpb.Metric_ExponentialHistogram{
ExponentialHistogram: &mpb.ExponentialHistogram{
AggregationTemporality: t,
DataPoints: ExponentialHistogramDataPoints(h.DataPoints),
},
}, nil
}
// ExponentialHistogramDataPoints returns a slice of OTLP ExponentialHistogramDataPoint generated
// from dPts.
func ExponentialHistogramDataPoints[N int64 | float64](dPts []metricdata.ExponentialHistogramDataPoint[N]) []*mpb.ExponentialHistogramDataPoint {
out := make([]*mpb.ExponentialHistogramDataPoint, 0, len(dPts))
for _, dPt := range dPts {
sum := float64(dPt.Sum)
ehdp := &mpb.ExponentialHistogramDataPoint{
Attributes: AttrIter(dPt.Attributes.Iter()),
StartTimeUnixNano: timeUnixNano(dPt.StartTime),
TimeUnixNano: timeUnixNano(dPt.Time),
Count: dPt.Count,
Sum: &sum,
Scale: dPt.Scale,
ZeroCount: dPt.ZeroCount,
Exemplars: Exemplars(dPt.Exemplars),
Positive: ExponentialHistogramDataPointBuckets(dPt.PositiveBucket),
Negative: ExponentialHistogramDataPointBuckets(dPt.NegativeBucket),
}
if v, ok := dPt.Min.Value(); ok {
vF64 := float64(v)
ehdp.Min = &vF64
}
if v, ok := dPt.Max.Value(); ok {
vF64 := float64(v)
ehdp.Max = &vF64
}
out = append(out, ehdp)
}
return out
}
// ExponentialHistogramDataPointBuckets returns an OTLP ExponentialHistogramDataPoint_Buckets generated
// from bucket.
func ExponentialHistogramDataPointBuckets(bucket metricdata.ExponentialBucket) *mpb.ExponentialHistogramDataPoint_Buckets {
return &mpb.ExponentialHistogramDataPoint_Buckets{
Offset: bucket.Offset,
BucketCounts: bucket.Counts,
}
}
// Temporality returns an OTLP AggregationTemporality generated from t. If t
// is unknown, an error is returned along with the invalid
// AggregationTemporality_AGGREGATION_TEMPORALITY_UNSPECIFIED.
func Temporality(t metricdata.Temporality) (mpb.AggregationTemporality, error) {
switch t {
case metricdata.DeltaTemporality:
return mpb.AggregationTemporality_AGGREGATION_TEMPORALITY_DELTA, nil
case metricdata.CumulativeTemporality:
return mpb.AggregationTemporality_AGGREGATION_TEMPORALITY_CUMULATIVE, nil
default:
err := fmt.Errorf("%w: %s", errUnknownTemporality, t)
return mpb.AggregationTemporality_AGGREGATION_TEMPORALITY_UNSPECIFIED, err
}
}
// timeUnixNano returns t as a Unix time, the number of nanoseconds elapsed
// since January 1, 1970 UTC as uint64.
// The result is undefined if the Unix time
// in nanoseconds cannot be represented by an int64
// (a date before the year 1678 or after 2262).
// timeUnixNano on the zero Time returns 0.
// The result does not depend on the location associated with t.
func timeUnixNano(t time.Time) uint64 {
if t.IsZero() {
return 0
}
return uint64(t.UnixNano())
}
// Exemplars returns a slice of OTLP Exemplars generated from exemplars.
func Exemplars[N int64 | float64](exemplars []metricdata.Exemplar[N]) []*mpb.Exemplar {
out := make([]*mpb.Exemplar, 0, len(exemplars))
for _, exemplar := range exemplars {
e := &mpb.Exemplar{
FilteredAttributes: KeyValues(exemplar.FilteredAttributes),
TimeUnixNano: timeUnixNano(exemplar.Time),
SpanId: exemplar.SpanID,
TraceId: exemplar.TraceID,
}
switch v := any(exemplar.Value).(type) {
case int64:
e.Value = &mpb.Exemplar_AsInt{
AsInt: v,
}
case float64:
e.Value = &mpb.Exemplar_AsDouble{
AsDouble: v,
}
}
out = append(out, e)
}
return out
}
// Summary returns an OTLP Metric_Summary generated from s.
func Summary(s metricdata.Summary) *mpb.Metric_Summary {
return &mpb.Metric_Summary{
Summary: &mpb.Summary{
DataPoints: SummaryDataPoints(s.DataPoints),
},
}
}
// SummaryDataPoints returns a slice of OTLP SummaryDataPoint generated from
// dPts.
func SummaryDataPoints(dPts []metricdata.SummaryDataPoint) []*mpb.SummaryDataPoint {
out := make([]*mpb.SummaryDataPoint, 0, len(dPts))
for _, dPt := range dPts {
sdp := &mpb.SummaryDataPoint{
Attributes: AttrIter(dPt.Attributes.Iter()),
StartTimeUnixNano: timeUnixNano(dPt.StartTime),
TimeUnixNano: timeUnixNano(dPt.Time),
Count: dPt.Count,
Sum: dPt.Sum,
QuantileValues: QuantileValues(dPt.QuantileValues),
}
out = append(out, sdp)
}
return out
}
// QuantileValues returns a slice of OTLP SummaryDataPoint_ValueAtQuantile
// generated from quantiles.
func QuantileValues(quantiles []metricdata.QuantileValue) []*mpb.SummaryDataPoint_ValueAtQuantile {
out := make([]*mpb.SummaryDataPoint_ValueAtQuantile, 0, len(quantiles))
for _, q := range quantiles {
quantile := &mpb.SummaryDataPoint_ValueAtQuantile{
Quantile: q.Quantile,
Value: q.Value,
}
out = append(out, quantile)
}
return out
}
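// Example (illustrative sketch, not part of the vendored file): converting
// an in-memory cumulative Sum into its OTLP form. The field values are
// illustrative only.
//
//	now := time.Now()
//	s := metricdata.Sum[int64]{
//		Temporality: metricdata.CumulativeTemporality,
//		IsMonotonic: true,
//		DataPoints: []metricdata.DataPoint[int64]{
//			{StartTime: now.Add(-time.Minute), Time: now, Value: 42},
//		},
//	}
//	pb, err := Sum[int64](s) // *mpb.Metric_Sum with cumulative temporality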

View File

@ -0,0 +1,9 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpmetricgrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
// Version is the current release version of the OpenTelemetry OTLP over gRPC metrics exporter in use.
func Version() string {
return "1.28.0"
}

View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,3 @@
# OTLP Trace Exporter
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/exporters/otlp/otlptrace)](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace)

View File

@ -0,0 +1,43 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
import (
"context"
tracepb "go.opentelemetry.io/proto/otlp/trace/v1"
)
// Client manages connections to the collector, handles the
// transformation of data into wire format, and the transmission of that
// data to the collector.
type Client interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Start should establish connection(s) to endpoint(s). It is
// called just once by the exporter, so the implementation
// does not need to worry about idempotence and locking.
Start(ctx context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Stop should close the connections. The function is called
// only once by the exporter, so the implementation does not
// need to worry about idempotence, but it may be called
// concurrently with UploadTraces, so proper
// locking is required. The function serves as a
// synchronization point - after the function returns, the
// process of closing connections is assumed to be finished.
Stop(ctx context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// UploadTraces should transform the passed traces to the wire
// format and send them to the collector. May be called
// concurrently.
UploadTraces(ctx context.Context, protoSpans []*tracepb.ResourceSpans) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
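// Example (illustrative sketch, not part of the vendored file): a minimal
// no-op implementation satisfying Client, e.g. for tests. The name
// noopClient is hypothetical.
//
//	type noopClient struct{}
//
//	func (noopClient) Start(context.Context) error { return nil }
//	func (noopClient) Stop(context.Context) error  { return nil }
//	func (noopClient) UploadTraces(context.Context, []*tracepb.ResourceSpans) error {
//		return nil
//	}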

View File

@ -0,0 +1,10 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package otlptrace contains abstractions for OTLP span exporters.
See the official OTLP span exporter implementations:
- [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc],
- [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp].
*/
package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace"

View File

@ -0,0 +1,105 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
import (
"context"
"errors"
"fmt"
"sync"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
tracesdk "go.opentelemetry.io/otel/sdk/trace"
)
var errAlreadyStarted = errors.New("already started")
// Exporter exports trace data in the OTLP wire format.
type Exporter struct {
client Client
mu sync.RWMutex
started bool
startOnce sync.Once
stopOnce sync.Once
}
// ExportSpans exports a batch of spans.
func (e *Exporter) ExportSpans(ctx context.Context, ss []tracesdk.ReadOnlySpan) error {
protoSpans := tracetransform.Spans(ss)
if len(protoSpans) == 0 {
return nil
}
err := e.client.UploadTraces(ctx, protoSpans)
if err != nil {
return fmt.Errorf("traces export: %w", err)
}
return nil
}
// Start establishes a connection to the receiving endpoint.
func (e *Exporter) Start(ctx context.Context) error {
err := errAlreadyStarted
e.startOnce.Do(func() {
e.mu.Lock()
e.started = true
e.mu.Unlock()
err = e.client.Start(ctx)
})
return err
}
// Shutdown flushes all exports and closes all connections to the receiving endpoint.
func (e *Exporter) Shutdown(ctx context.Context) error {
e.mu.RLock()
started := e.started
e.mu.RUnlock()
if !started {
return nil
}
var err error
e.stopOnce.Do(func() {
err = e.client.Stop(ctx)
e.mu.Lock()
e.started = false
e.mu.Unlock()
})
return err
}
var _ tracesdk.SpanExporter = (*Exporter)(nil)
// New constructs a new Exporter and starts it.
func New(ctx context.Context, client Client) (*Exporter, error) {
exp := NewUnstarted(client)
if err := exp.Start(ctx); err != nil {
return nil, err
}
return exp, nil
}
// NewUnstarted constructs a new Exporter and does not start it.
func NewUnstarted(client Client) *Exporter {
return &Exporter{
client: client,
}
}
// MarshalLog is the marshaling function used by the logging system to represent this Exporter.
func (e *Exporter) MarshalLog() interface{} {
return struct {
Type string
Client Client
}{
Type: "otlptrace",
Client: e.client,
}
}
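// Example (illustrative sketch, not part of the vendored file): typical
// wiring of an Exporter into the trace SDK, assuming the gRPC client from
// the otlptracegrpc sub-package and the sdk/trace package imported as
// tracesdk.
//
//	client := otlptracegrpc.NewClient()
//	exp, err := otlptrace.New(ctx, client)
//	if err != nil {
//		// handle the startup error
//	}
//	tp := tracesdk.NewTracerProvider(tracesdk.WithBatcher(exp))
//	defer func() { _ = tp.Shutdown(ctx) }()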

View File

@ -0,0 +1,147 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk/resource"
commonpb "go.opentelemetry.io/proto/otlp/common/v1"
)
// KeyValues transforms a slice of attribute KeyValues into OTLP key-values.
func KeyValues(attrs []attribute.KeyValue) []*commonpb.KeyValue {
if len(attrs) == 0 {
return nil
}
out := make([]*commonpb.KeyValue, 0, len(attrs))
for _, kv := range attrs {
out = append(out, KeyValue(kv))
}
return out
}
// Iterator transforms an attribute iterator into OTLP key-values.
func Iterator(iter attribute.Iterator) []*commonpb.KeyValue {
l := iter.Len()
if l == 0 {
return nil
}
out := make([]*commonpb.KeyValue, 0, l)
for iter.Next() {
out = append(out, KeyValue(iter.Attribute()))
}
return out
}
// ResourceAttributes transforms a Resource into OTLP key-values.
func ResourceAttributes(res *resource.Resource) []*commonpb.KeyValue {
return Iterator(res.Iter())
}
// KeyValue transforms an attribute KeyValue into an OTLP key-value.
func KeyValue(kv attribute.KeyValue) *commonpb.KeyValue {
return &commonpb.KeyValue{Key: string(kv.Key), Value: Value(kv.Value)}
}
// Value transforms an attribute Value into an OTLP AnyValue.
func Value(v attribute.Value) *commonpb.AnyValue {
av := new(commonpb.AnyValue)
switch v.Type() {
case attribute.BOOL:
av.Value = &commonpb.AnyValue_BoolValue{
BoolValue: v.AsBool(),
}
case attribute.BOOLSLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: boolSliceValues(v.AsBoolSlice()),
},
}
case attribute.INT64:
av.Value = &commonpb.AnyValue_IntValue{
IntValue: v.AsInt64(),
}
case attribute.INT64SLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: int64SliceValues(v.AsInt64Slice()),
},
}
case attribute.FLOAT64:
av.Value = &commonpb.AnyValue_DoubleValue{
DoubleValue: v.AsFloat64(),
}
case attribute.FLOAT64SLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: float64SliceValues(v.AsFloat64Slice()),
},
}
case attribute.STRING:
av.Value = &commonpb.AnyValue_StringValue{
StringValue: v.AsString(),
}
case attribute.STRINGSLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: stringSliceValues(v.AsStringSlice()),
},
}
default:
av.Value = &commonpb.AnyValue_StringValue{
StringValue: "INVALID",
}
}
return av
}
func boolSliceValues(vals []bool) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_BoolValue{
BoolValue: v,
},
}
}
return converted
}
func int64SliceValues(vals []int64) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_IntValue{
IntValue: v,
},
}
}
return converted
}
func float64SliceValues(vals []float64) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_DoubleValue{
DoubleValue: v,
},
}
}
return converted
}
func stringSliceValues(vals []string) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_StringValue{
StringValue: v,
},
}
}
return converted
}

View File

@ -0,0 +1,19 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/sdk/instrumentation"
commonpb "go.opentelemetry.io/proto/otlp/common/v1"
)
// InstrumentationScope transforms an instrumentation Scope into an OTLP
// InstrumentationScope.
func InstrumentationScope(il instrumentation.Scope) *commonpb.InstrumentationScope {
if il == (instrumentation.Scope{}) {
return nil
}
return &commonpb.InstrumentationScope{
Name: il.Name,
Version: il.Version,
}
}

View File

@ -0,0 +1,17 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/sdk/resource"
resourcepb "go.opentelemetry.io/proto/otlp/resource/v1"
)
// Resource transforms a Resource into an OTLP Resource.
func Resource(r *resource.Resource) *resourcepb.Resource {
if r == nil {
return nil
}
return &resourcepb.Resource{Attributes: ResourceAttributes(r)}
}

View File

@ -0,0 +1,207 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/sdk/instrumentation"
tracesdk "go.opentelemetry.io/otel/sdk/trace"
"go.opentelemetry.io/otel/trace"
tracepb "go.opentelemetry.io/proto/otlp/trace/v1"
)
// Spans transforms a slice of OpenTelemetry spans into a slice of OTLP
// ResourceSpans.
func Spans(sdl []tracesdk.ReadOnlySpan) []*tracepb.ResourceSpans {
if len(sdl) == 0 {
return nil
}
rsm := make(map[attribute.Distinct]*tracepb.ResourceSpans)
type key struct {
r attribute.Distinct
is instrumentation.Scope
}
ssm := make(map[key]*tracepb.ScopeSpans)
var resources int
for _, sd := range sdl {
if sd == nil {
continue
}
rKey := sd.Resource().Equivalent()
k := key{
r: rKey,
is: sd.InstrumentationScope(),
}
scopeSpan, iOk := ssm[k]
if !iOk {
// Either the resource or instrumentation scope was unknown.
scopeSpan = &tracepb.ScopeSpans{
Scope: InstrumentationScope(sd.InstrumentationScope()),
Spans: []*tracepb.Span{},
SchemaUrl: sd.InstrumentationScope().SchemaURL,
}
}
scopeSpan.Spans = append(scopeSpan.Spans, span(sd))
ssm[k] = scopeSpan
rs, rOk := rsm[rKey]
if !rOk {
resources++
// The resource was unknown.
rs = &tracepb.ResourceSpans{
Resource: Resource(sd.Resource()),
ScopeSpans: []*tracepb.ScopeSpans{scopeSpan},
SchemaUrl: sd.Resource().SchemaURL(),
}
rsm[rKey] = rs
continue
}
// The resource has been seen before. Check if the instrumentation
// library lookup was unknown because if so we need to add it to the
// ResourceSpans. Otherwise, the instrumentation library has already
// been seen and the append we did above is already included in the
// ScopeSpans reference.
if !iOk {
rs.ScopeSpans = append(rs.ScopeSpans, scopeSpan)
}
}
// Transform the categorized map into a slice
rss := make([]*tracepb.ResourceSpans, 0, resources)
for _, rs := range rsm {
rss = append(rss, rs)
}
return rss
}
// span transforms a Span into an OTLP span.
func span(sd tracesdk.ReadOnlySpan) *tracepb.Span {
if sd == nil {
return nil
}
tid := sd.SpanContext().TraceID()
sid := sd.SpanContext().SpanID()
s := &tracepb.Span{
TraceId: tid[:],
SpanId: sid[:],
TraceState: sd.SpanContext().TraceState().String(),
Status: status(sd.Status().Code, sd.Status().Description),
StartTimeUnixNano: uint64(sd.StartTime().UnixNano()),
EndTimeUnixNano: uint64(sd.EndTime().UnixNano()),
Links: links(sd.Links()),
Kind: spanKind(sd.SpanKind()),
Name: sd.Name(),
Attributes: KeyValues(sd.Attributes()),
Events: spanEvents(sd.Events()),
DroppedAttributesCount: uint32(sd.DroppedAttributes()),
DroppedEventsCount: uint32(sd.DroppedEvents()),
DroppedLinksCount: uint32(sd.DroppedLinks()),
}
if psid := sd.Parent().SpanID(); psid.IsValid() {
s.ParentSpanId = psid[:]
}
s.Flags = buildSpanFlags(sd.Parent())
return s
}
// status transforms a span code and message into an OTLP span status.
func status(status codes.Code, message string) *tracepb.Status {
var c tracepb.Status_StatusCode
switch status {
case codes.Ok:
c = tracepb.Status_STATUS_CODE_OK
case codes.Error:
c = tracepb.Status_STATUS_CODE_ERROR
default:
c = tracepb.Status_STATUS_CODE_UNSET
}
return &tracepb.Status{
Code: c,
Message: message,
}
}
// links transforms span Links to OTLP span links.
func links(links []tracesdk.Link) []*tracepb.Span_Link {
if len(links) == 0 {
return nil
}
sl := make([]*tracepb.Span_Link, 0, len(links))
for _, otLink := range links {
// This redefinition is necessary to prevent otLink.*ID[:] copies
// being reused -- in short we need a new otLink per iteration.
otLink := otLink
tid := otLink.SpanContext.TraceID()
sid := otLink.SpanContext.SpanID()
flags := buildSpanFlags(otLink.SpanContext)
sl = append(sl, &tracepb.Span_Link{
TraceId: tid[:],
SpanId: sid[:],
Attributes: KeyValues(otLink.Attributes),
DroppedAttributesCount: uint32(otLink.DroppedAttributeCount),
Flags: flags,
})
}
return sl
}
func buildSpanFlags(sc trace.SpanContext) uint32 {
flags := tracepb.SpanFlags_SPAN_FLAGS_CONTEXT_HAS_IS_REMOTE_MASK
if sc.IsRemote() {
flags |= tracepb.SpanFlags_SPAN_FLAGS_CONTEXT_IS_REMOTE_MASK
}
return uint32(flags)
}
// spanEvents transforms span Events into OTLP span events.
func spanEvents(es []tracesdk.Event) []*tracepb.Span_Event {
if len(es) == 0 {
return nil
}
events := make([]*tracepb.Span_Event, len(es))
// Transform message events
for i := 0; i < len(es); i++ {
events[i] = &tracepb.Span_Event{
Name: es[i].Name,
TimeUnixNano: uint64(es[i].Time.UnixNano()),
Attributes: KeyValues(es[i].Attributes),
DroppedAttributesCount: uint32(es[i].DroppedAttributeCount),
}
}
return events
}
// spanKind transforms a SpanKind to an OTLP span kind.
func spanKind(kind trace.SpanKind) tracepb.Span_SpanKind {
switch kind {
case trace.SpanKindInternal:
return tracepb.Span_SPAN_KIND_INTERNAL
case trace.SpanKindClient:
return tracepb.Span_SPAN_KIND_CLIENT
case trace.SpanKindServer:
return tracepb.Span_SPAN_KIND_SERVER
case trace.SpanKindProducer:
return tracepb.Span_SPAN_KIND_PRODUCER
case trace.SpanKindConsumer:
return tracepb.Span_SPAN_KIND_CONSUMER
default:
return tracepb.Span_SPAN_KIND_UNSPECIFIED
}
}
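// Example (illustrative sketch, not part of the vendored file): buildSpanFlags
// always sets the HAS_IS_REMOTE bit and additionally sets IS_REMOTE only for
// remote span contexts.
//
//	remote := trace.NewSpanContext(trace.SpanContextConfig{Remote: true})
//	f := buildSpanFlags(remote)
//	// f has both SPAN_FLAGS_CONTEXT_HAS_IS_REMOTE_MASK and
//	// SPAN_FLAGS_CONTEXT_IS_REMOTE_MASK set; a non-remote context sets
//	// only the former.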

View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,3 @@
# OTLP Trace gRPC Exporter
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc)](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc)

View File

@ -0,0 +1,295 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlptracegrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
import (
"context"
"errors"
"sync"
"time"
"google.golang.org/genproto/googleapis/rpc/errdetails"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/status"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/retry"
coltracepb "go.opentelemetry.io/proto/otlp/collector/trace/v1"
tracepb "go.opentelemetry.io/proto/otlp/trace/v1"
)
type client struct {
endpoint string
dialOpts []grpc.DialOption
metadata metadata.MD
exportTimeout time.Duration
requestFunc retry.RequestFunc
// stopCtx is used as a parent context for all exports. Therefore, when it
// is canceled with the stopFunc all exports are canceled.
stopCtx context.Context
// stopFunc cancels stopCtx, stopping any active exports.
stopFunc context.CancelFunc
// ourConn keeps track of where conn was created: true if created here on
// Start, or false if passed with an option. This is important on Shutdown
// as the conn should only be closed if created here on start. Otherwise,
// it is up to the processes that passed the conn to close it.
ourConn bool
conn *grpc.ClientConn
tscMu sync.RWMutex
tsc coltracepb.TraceServiceClient
}
// Compile time check *client implements otlptrace.Client.
var _ otlptrace.Client = (*client)(nil)
// NewClient creates a new gRPC trace client.
func NewClient(opts ...Option) otlptrace.Client {
return newClient(opts...)
}
func newClient(opts ...Option) *client {
cfg := otlpconfig.NewGRPCConfig(asGRPCOptions(opts)...)
ctx, cancel := context.WithCancel(context.Background())
c := &client{
endpoint: cfg.Traces.Endpoint,
exportTimeout: cfg.Traces.Timeout,
requestFunc: cfg.RetryConfig.RequestFunc(retryable),
dialOpts: cfg.DialOptions,
stopCtx: ctx,
stopFunc: cancel,
conn: cfg.GRPCConn,
}
if len(cfg.Traces.Headers) > 0 {
c.metadata = metadata.New(cfg.Traces.Headers)
}
return c
}
// Start establishes a gRPC connection to the collector.
func (c *client) Start(context.Context) error {
if c.conn == nil {
// If the caller did not provide a ClientConn when the client was
// created, create one using the configuration they did provide.
conn, err := grpc.NewClient(c.endpoint, c.dialOpts...)
if err != nil {
return err
}
// Keep track that we own the lifecycle of this conn and need to close
// it on Shutdown.
c.ourConn = true
c.conn = conn
}
// The otlptrace.Client interface states this method is called just once,
// so no need to check if already started.
c.tscMu.Lock()
c.tsc = coltracepb.NewTraceServiceClient(c.conn)
c.tscMu.Unlock()
return nil
}
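// Example (illustrative sketch, not part of the vendored file): from a
// caller's perspective the client is usually created with options and handed
// to otlptrace.New, which invokes Start. The endpoint value is hypothetical.
//
//	client := otlptracegrpc.NewClient(
//		otlptracegrpc.WithEndpoint("collector.example:4317"),
//		otlptracegrpc.WithInsecure(),
//	)
//	exp, err := otlptrace.New(context.Background(), client)
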
var errAlreadyStopped = errors.New("the client is already stopped")
// Stop shuts down the client.
//
// Any active connections to a remote endpoint are closed if they were created
// by the client. Any gRPC connection passed during creation using
// WithGRPCConn will not be closed. It is the caller's responsibility to
// handle cleanup of that resource.
//
// This method synchronizes with the UploadTraces method of the client. It
// will wait for any active calls to that method to complete unimpeded, or it
// will cancel any active calls if ctx expires. If ctx expires, the context
// error will be forwarded as the returned error. All client held resources
// will still be released in this situation.
//
// If the client has already stopped, an error will be returned describing
// this.
func (c *client) Stop(ctx context.Context) error {
// Make sure to return context error if the context is done when calling this method.
err := ctx.Err()
// Acquire the c.tscMu lock within the ctx lifetime.
acquired := make(chan struct{})
go func() {
c.tscMu.Lock()
close(acquired)
}()
select {
case <-ctx.Done():
// The Stop timeout was reached. Cancel any remaining exports to force
// the lock to be released, and save the timeout error to return to
// signal the shutdown timed out before stopping cleanly.
c.stopFunc()
err = ctx.Err()
// To ensure the client is not left in a dirty state c.tsc needs to be
// set to nil. To avoid the race condition when doing this, ensure
// that all the exports are killed (initiated by c.stopFunc).
<-acquired
case <-acquired:
}
// Hold the tscMu lock for the rest of the function to ensure no new
// exports are started.
defer c.tscMu.Unlock()
// The otlptrace.Client interface states this method is called only
// once, but there is no guarantee it is called after Start. Ensure the
// client is started before doing anything and let the caller know if they
// made a mistake.
if c.tsc == nil {
return errAlreadyStopped
}
// Clear c.tsc to signal the client is stopped.
c.tsc = nil
if c.ourConn {
closeErr := c.conn.Close()
// A context timeout error takes precedence over this error.
if err == nil && closeErr != nil {
err = closeErr
}
}
return err
}
var errShutdown = errors.New("the client is shutdown")
// UploadTraces sends a batch of spans.
//
// Retryable errors from the server will be handled according to any
// RetryConfig the client was created with.
func (c *client) UploadTraces(ctx context.Context, protoSpans []*tracepb.ResourceSpans) error {
// Hold a read lock to ensure a shut down initiated after this starts does
// not abandon the export. This read lock acquire has less priority than a
// write lock acquire (i.e. Stop), meaning if the client is shutting down
// this will come after the shut down.
c.tscMu.RLock()
defer c.tscMu.RUnlock()
if c.tsc == nil {
return errShutdown
}
ctx, cancel := c.exportContext(ctx)
defer cancel()
return c.requestFunc(ctx, func(iCtx context.Context) error {
resp, err := c.tsc.Export(iCtx, &coltracepb.ExportTraceServiceRequest{
ResourceSpans: protoSpans,
})
if resp != nil && resp.PartialSuccess != nil {
msg := resp.PartialSuccess.GetErrorMessage()
n := resp.PartialSuccess.GetRejectedSpans()
if n != 0 || msg != "" {
err := internal.TracePartialSuccessError(n, msg)
otel.Handle(err)
}
}
// nil is converted to OK.
if status.Code(err) == codes.OK {
// Success.
return nil
}
return err
})
}
// exportContext returns a copy of parent with an appropriate deadline and
// cancellation function.
//
// It is the caller's responsibility to cancel the returned context once its
// use is complete, via the parent or directly with the returned CancelFunc, to
// ensure all resources are correctly released.
func (c *client) exportContext(parent context.Context) (context.Context, context.CancelFunc) {
var (
ctx context.Context
cancel context.CancelFunc
)
if c.exportTimeout > 0 {
ctx, cancel = context.WithTimeout(parent, c.exportTimeout)
} else {
ctx, cancel = context.WithCancel(parent)
}
if c.metadata.Len() > 0 {
ctx = metadata.NewOutgoingContext(ctx, c.metadata)
}
// Unify the client stopCtx with the parent.
go func() {
select {
case <-ctx.Done():
case <-c.stopCtx.Done():
// Cancel the export as the shutdown has timed out.
cancel()
}
}()
return ctx, cancel
}
// retryable returns if err identifies a request that can be retried and a
// duration to wait for if an explicit throttle time is included in err.
func retryable(err error) (bool, time.Duration) {
s := status.Convert(err)
return retryableGRPCStatus(s)
}
func retryableGRPCStatus(s *status.Status) (bool, time.Duration) {
switch s.Code() {
case codes.Canceled,
codes.DeadlineExceeded,
codes.Aborted,
codes.OutOfRange,
codes.Unavailable,
codes.DataLoss:
// Additionally handle RetryInfo.
_, d := throttleDelay(s)
return true, d
case codes.ResourceExhausted:
// Retry only if the server signals that the recovery from resource exhaustion is possible.
return throttleDelay(s)
}
// Not a retry-able error.
return false, 0
}
// throttleDelay returns whether the status s includes a RetryInfo detail and,
// if so, the explicit throttle duration to wait before retrying.
func throttleDelay(s *status.Status) (bool, time.Duration) {
for _, detail := range s.Details() {
if t, ok := detail.(*errdetails.RetryInfo); ok {
return true, t.RetryDelay.AsDuration()
}
}
return false, 0
}
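// exampleThrottleDelay is an illustrative sketch added by the editor; it is
// not part of the vendored source and assumes
// google.golang.org/protobuf/types/known/durationpb is also imported. It shows
// how a server-supplied RetryInfo detail surfaces as an explicit throttle
// delay.
func exampleThrottleDelay() (bool, time.Duration) {
	st, err := status.New(codes.ResourceExhausted, "server overloaded").WithDetails(
		&errdetails.RetryInfo{RetryDelay: durationpb.New(2 * time.Second)},
	)
	if err != nil {
		return false, 0
	}
	return throttleDelay(st) // reports true and a 2s delay
}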
// MarshalLog is the marshaling function used by the logging system to represent this Client.
func (c *client) MarshalLog() interface{} {
return struct {
Type string
Endpoint string
}{
Type: "otlphttpgrpc",
Endpoint: c.endpoint,
}
}

View File

@ -0,0 +1,66 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package otlptracegrpc provides an OTLP span exporter using gRPC.
By default the telemetry is sent to https://localhost:4317.
Exporter should be created using [New].
The environment variables described below can be used for configuration.
OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_TRACES_ENDPOINT (default: "https://localhost:4317") -
target to which the exporter sends telemetry.
The target syntax is defined in https://github.com/grpc/grpc/blob/master/doc/naming.md.
The value must contain a host.
The value may additionally contain a port, a scheme, and a path.
The value accepts the "http" and "https" schemes.
The value should not contain a query string or fragment.
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT takes precedence over OTEL_EXPORTER_OTLP_ENDPOINT.
The configuration can be overridden by [WithEndpoint], [WithEndpointURL], [WithInsecure], and [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_INSECURE, OTEL_EXPORTER_OTLP_TRACES_INSECURE (default: "false") -
setting "true" disables client transport security for the exporter's gRPC connection.
You can use this only when an endpoint is provided without the http or https scheme.
The scheme defined via OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
takes precedence over this setting.
OTEL_EXPORTER_OTLP_TRACES_INSECURE takes precedence over OTEL_EXPORTER_OTLP_INSECURE.
The configuration can be overridden by [WithInsecure], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_HEADERS, OTEL_EXPORTER_OTLP_TRACES_HEADERS (default: none) -
key-value pairs used as gRPC metadata associated with gRPC requests.
The value is expected to be represented in a format matching the [W3C Baggage HTTP Header Content Format],
except that additional semi-colon delimited metadata is not supported.
Example value: "key1=value1,key2=value2".
OTEL_EXPORTER_OTLP_TRACES_HEADERS takes precedence over OTEL_EXPORTER_OTLP_HEADERS.
The configuration can be overridden by [WithHeaders] option.
OTEL_EXPORTER_OTLP_TIMEOUT, OTEL_EXPORTER_OTLP_TRACES_TIMEOUT (default: "10000") -
maximum time in milliseconds the OTLP exporter waits for each batch export.
OTEL_EXPORTER_OTLP_TRACES_TIMEOUT takes precedence over OTEL_EXPORTER_OTLP_TIMEOUT.
The configuration can be overridden by [WithTimeout] option.
OTEL_EXPORTER_OTLP_COMPRESSION, OTEL_EXPORTER_OTLP_TRACES_COMPRESSION (default: none) -
the gRPC compressor the exporter uses.
Supported value: "gzip".
OTEL_EXPORTER_OTLP_TRACES_COMPRESSION takes precedence over OTEL_EXPORTER_OTLP_COMPRESSION.
The configuration can be overridden by [WithCompressor], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_CERTIFICATE, OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE (default: none) -
the filepath to the trusted certificate to use when verifying a server's TLS credentials.
OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CERTIFICATE.
The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE, OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE (default: none) -
the filepath to the client certificate (or certificate chain), matching the client's private key, to use for mTLS communication, in PEM format.
OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE.
The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_CLIENT_KEY, OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY (default: none) -
the filepath to the client's private key to use in mTLS communication in PEM format.
OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY takes precedence over OTEL_EXPORTER_OTLP_CLIENT_KEY.
The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] options.
[W3C Baggage HTTP Header Content Format]: https://www.w3.org/TR/baggage/#header-content
*/
package otlptracegrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
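// exampleNewExporter is an illustrative sketch added by the editor; it is not
// part of the vendored source. It shows the construction path described in the
// package documentation above. The endpoint value, the sdktrace alias for
// go.opentelemetry.io/otel/sdk/trace, and the extra imports ("context",
// "go.opentelemetry.io/otel") are assumptions of the example.
func exampleNewExporter(ctx context.Context) error {
	// Options take precedence over the OTEL_EXPORTER_OTLP_* environment
	// variables documented above.
	exp, err := New(ctx,
		WithEndpoint("localhost:4317"),
		WithInsecure(),
	)
	if err != nil {
		return err
	}
	// Register the exporter with a batching tracer provider so spans are
	// exported over gRPC to the collector.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	otel.SetTracerProvider(tp)
	return nil
}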

View File

@ -0,0 +1,20 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlptracegrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
import (
"context"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
)
// New constructs a new Exporter and starts it.
func New(ctx context.Context, opts ...Option) (*otlptrace.Exporter, error) {
return otlptrace.New(ctx, NewClient(opts...))
}
// NewUnstarted constructs a new Exporter and does not start it.
func NewUnstarted(opts ...Option) *otlptrace.Exporter {
return otlptrace.NewUnstarted(NewClient(opts...))
}

View File

@ -0,0 +1,191 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/envconfig/envconfig.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package envconfig // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig"
import (
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"net/url"
"strconv"
"strings"
"time"
"go.opentelemetry.io/otel/internal/global"
)
// ConfigFn is the generic function used to set a config.
type ConfigFn func(*EnvOptionsReader)
// EnvOptionsReader reads the required environment variables.
type EnvOptionsReader struct {
GetEnv func(string) string
ReadFile func(string) ([]byte, error)
Namespace string
}
// Apply runs every ConfigFn.
func (e *EnvOptionsReader) Apply(opts ...ConfigFn) {
for _, o := range opts {
o(e)
}
}
// GetEnvValue gets an OTLP environment variable value of the specified key
// using the GetEnv function.
// This function prepends the OTLP specified namespace to all key lookups.
func (e *EnvOptionsReader) GetEnvValue(key string) (string, bool) {
v := strings.TrimSpace(e.GetEnv(keyWithNamespace(e.Namespace, key)))
return v, v != ""
}
// WithString retrieves the specified config and passes it to ConfigFn as a string.
func WithString(n string, fn func(string)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
fn(v)
}
}
}
// WithBool returns a ConfigFn that reads the environment variable n and if it exists passes its parsed bool value to fn.
func WithBool(n string, fn func(bool)) ConfigFn {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
b := strings.ToLower(v) == "true"
fn(b)
}
}
}
// WithDuration retrieves the specified config and passes it to ConfigFn as a duration.
func WithDuration(n string, fn func(time.Duration)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
d, err := strconv.Atoi(v)
if err != nil {
global.Error(err, "parse duration", "input", v)
return
}
fn(time.Duration(d) * time.Millisecond)
}
}
}
// WithHeaders retrieves the specified config and passes it to ConfigFn as a map of HTTP headers.
func WithHeaders(n string, fn func(map[string]string)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
fn(stringToHeader(v))
}
}
}
// WithURL retrieves the specified config and passes it to ConfigFn as a net/url.URL.
func WithURL(n string, fn func(*url.URL)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
u, err := url.Parse(v)
if err != nil {
global.Error(err, "parse url", "input", v)
return
}
fn(u)
}
}
}
// WithCertPool returns a ConfigFn that reads the environment variable n as a filepath to a TLS certificate pool. If it exists, it is parsed as a crypto/x509.CertPool and it is passed to fn.
func WithCertPool(n string, fn func(*x509.CertPool)) ConfigFn {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
b, err := e.ReadFile(v)
if err != nil {
global.Error(err, "read tls ca cert file", "file", v)
return
}
c, err := createCertPool(b)
if err != nil {
global.Error(err, "create tls cert pool")
return
}
fn(c)
}
}
}
// WithClientCert returns a ConfigFn that reads the environment variables nc and nk as filepaths to a client certificate and key pair. If they exist, they are parsed as a crypto/tls.Certificate and passed to fn.
func WithClientCert(nc, nk string, fn func(tls.Certificate)) ConfigFn {
return func(e *EnvOptionsReader) {
vc, okc := e.GetEnvValue(nc)
vk, okk := e.GetEnvValue(nk)
if !okc || !okk {
return
}
cert, err := e.ReadFile(vc)
if err != nil {
global.Error(err, "read tls client cert", "file", vc)
return
}
key, err := e.ReadFile(vk)
if err != nil {
global.Error(err, "read tls client key", "file", vk)
return
}
crt, err := tls.X509KeyPair(cert, key)
if err != nil {
global.Error(err, "create tls client key pair")
return
}
fn(crt)
}
}
func keyWithNamespace(ns, key string) string {
if ns == "" {
return key
}
return fmt.Sprintf("%s_%s", ns, key)
}
func stringToHeader(value string) map[string]string {
headersPairs := strings.Split(value, ",")
headers := make(map[string]string)
for _, header := range headersPairs {
n, v, found := strings.Cut(header, "=")
if !found {
global.Error(errors.New("missing '="), "parse headers", "input", header)
continue
}
name, err := url.PathUnescape(n)
if err != nil {
global.Error(err, "escape header key", "key", n)
continue
}
trimmedName := strings.TrimSpace(name)
value, err := url.PathUnescape(v)
if err != nil {
global.Error(err, "escape header value", "value", v)
continue
}
trimmedValue := strings.TrimSpace(value)
headers[trimmedName] = trimmedValue
}
return headers
}
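// Illustrative note added by the editor (not part of the vendored source):
// stringToHeader parses the comma-separated form documented for
// OTEL_EXPORTER_OTLP_HEADERS, e.g. "key1=value1,key2=value2" becomes
// map[string]string{"key1": "value1", "key2": "value2"}; pairs without an
// '=' are reported through global.Error and skipped.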
func createCertPool(certBytes []byte) (*x509.CertPool, error) {
cp := x509.NewCertPool()
if ok := cp.AppendCertsFromPEM(certBytes); !ok {
return nil, errors.New("failed to append certificate to the cert pool")
}
return cp, nil
}

View File

@ -0,0 +1,24 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package internal // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal"
//go:generate gotmpl --body=../../../../../internal/shared/otlp/partialsuccess.go.tmpl "--data={}" --out=partialsuccess.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/partialsuccess_test.go.tmpl "--data={}" --out=partialsuccess_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/retry/retry.go.tmpl "--data={}" --out=retry/retry.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/retry/retry_test.go.tmpl "--data={}" --out=retry/retry_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/envconfig/envconfig.go.tmpl "--data={}" --out=envconfig/envconfig.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/envconfig/envconfig_test.go.tmpl "--data={}" --out=envconfig/envconfig_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlpconfig/envconfig.go.tmpl "--data={\"envconfigImportPath\": \"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig\"}" --out=otlpconfig/envconfig.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlpconfig/options.go.tmpl "--data={\"retryImportPath\": \"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/retry\"}" --out=otlpconfig/options.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlpconfig/options_test.go.tmpl "--data={\"envconfigImportPath\": \"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig\"}" --out=otlpconfig/options_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlpconfig/optiontypes.go.tmpl "--data={}" --out=otlpconfig/optiontypes.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlpconfig/tls.go.tmpl "--data={}" --out=otlpconfig/tls.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlptracetest/client.go.tmpl "--data={}" --out=otlptracetest/client.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlptracetest/collector.go.tmpl "--data={}" --out=otlptracetest/collector.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlptracetest/data.go.tmpl "--data={}" --out=otlptracetest/data.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlptracetest/otlptest.go.tmpl "--data={}" --out=otlptracetest/otlptest.go

View File

@ -0,0 +1,142 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlptrace/otlpconfig/envconfig.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpconfig // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig"
import (
"crypto/tls"
"crypto/x509"
"net/url"
"os"
"path"
"strings"
"time"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig"
)
// DefaultEnvOptionsReader is the default environments reader.
var DefaultEnvOptionsReader = envconfig.EnvOptionsReader{
GetEnv: os.Getenv,
ReadFile: os.ReadFile,
Namespace: "OTEL_EXPORTER_OTLP",
}
// ApplyGRPCEnvConfigs applies the env configurations for gRPC.
func ApplyGRPCEnvConfigs(cfg Config) Config {
opts := getOptionsFromEnv()
for _, opt := range opts {
cfg = opt.ApplyGRPCOption(cfg)
}
return cfg
}
// ApplyHTTPEnvConfigs applies the env configurations for HTTP.
func ApplyHTTPEnvConfigs(cfg Config) Config {
opts := getOptionsFromEnv()
for _, opt := range opts {
cfg = opt.ApplyHTTPOption(cfg)
}
return cfg
}
func getOptionsFromEnv() []GenericOption {
opts := []GenericOption{}
tlsConf := &tls.Config{}
DefaultEnvOptionsReader.Apply(
envconfig.WithURL("ENDPOINT", func(u *url.URL) {
opts = append(opts, withEndpointScheme(u))
opts = append(opts, newSplitOption(func(cfg Config) Config {
cfg.Traces.Endpoint = u.Host
// For OTLP/HTTP endpoint URLs without a per-signal
// configuration, the passed endpoint is used as a base URL
// and the signals are sent to these paths relative to that.
cfg.Traces.URLPath = path.Join(u.Path, DefaultTracesPath)
return cfg
}, withEndpointForGRPC(u)))
}),
envconfig.WithURL("TRACES_ENDPOINT", func(u *url.URL) {
opts = append(opts, withEndpointScheme(u))
opts = append(opts, newSplitOption(func(cfg Config) Config {
cfg.Traces.Endpoint = u.Host
// For endpoint URLs for OTLP/HTTP per-signal variables, the
// URL MUST be used as-is without any modification. The only
// exception is that if an URL contains no path part, the root
// path / MUST be used.
path := u.Path
if path == "" {
path = "/"
}
cfg.Traces.URLPath = path
return cfg
}, withEndpointForGRPC(u)))
}),
envconfig.WithCertPool("CERTIFICATE", func(p *x509.CertPool) { tlsConf.RootCAs = p }),
envconfig.WithCertPool("TRACES_CERTIFICATE", func(p *x509.CertPool) { tlsConf.RootCAs = p }),
envconfig.WithClientCert("CLIENT_CERTIFICATE", "CLIENT_KEY", func(c tls.Certificate) { tlsConf.Certificates = []tls.Certificate{c} }),
envconfig.WithClientCert("TRACES_CLIENT_CERTIFICATE", "TRACES_CLIENT_KEY", func(c tls.Certificate) { tlsConf.Certificates = []tls.Certificate{c} }),
withTLSConfig(tlsConf, func(c *tls.Config) { opts = append(opts, WithTLSClientConfig(c)) }),
envconfig.WithBool("INSECURE", func(b bool) { opts = append(opts, withInsecure(b)) }),
envconfig.WithBool("TRACES_INSECURE", func(b bool) { opts = append(opts, withInsecure(b)) }),
envconfig.WithHeaders("HEADERS", func(h map[string]string) { opts = append(opts, WithHeaders(h)) }),
envconfig.WithHeaders("TRACES_HEADERS", func(h map[string]string) { opts = append(opts, WithHeaders(h)) }),
WithEnvCompression("COMPRESSION", func(c Compression) { opts = append(opts, WithCompression(c)) }),
WithEnvCompression("TRACES_COMPRESSION", func(c Compression) { opts = append(opts, WithCompression(c)) }),
envconfig.WithDuration("TIMEOUT", func(d time.Duration) { opts = append(opts, WithTimeout(d)) }),
envconfig.WithDuration("TRACES_TIMEOUT", func(d time.Duration) { opts = append(opts, WithTimeout(d)) }),
)
return opts
}
func withEndpointScheme(u *url.URL) GenericOption {
switch strings.ToLower(u.Scheme) {
case "http", "unix":
return WithInsecure()
default:
return WithSecure()
}
}
func withEndpointForGRPC(u *url.URL) func(cfg Config) Config {
return func(cfg Config) Config {
// For OTLP/gRPC endpoints, this is the target to which the
// exporter is going to send telemetry.
cfg.Traces.Endpoint = path.Join(u.Host, u.Path)
return cfg
}
}
// WithEnvCompression retrieves the specified config and passes it to ConfigFn as a Compression.
func WithEnvCompression(n string, fn func(Compression)) func(e *envconfig.EnvOptionsReader) {
return func(e *envconfig.EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
cp := NoCompression
if v == "gzip" {
cp = GzipCompression
}
fn(cp)
}
}
}
// revive:disable-next-line:flag-parameter
func withInsecure(b bool) GenericOption {
if b {
return WithInsecure()
}
return WithSecure()
}
func withTLSConfig(c *tls.Config, fn func(*tls.Config)) func(e *envconfig.EnvOptionsReader) {
return func(e *envconfig.EnvOptionsReader) {
if c.RootCAs != nil || len(c.Certificates) > 0 {
fn(c)
}
}
}

View File

@ -0,0 +1,353 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlptrace/otlpconfig/options.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpconfig // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig"
import (
"crypto/tls"
"fmt"
"net/http"
"net/url"
"path"
"strings"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/backoff"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"google.golang.org/grpc/encoding/gzip"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/retry"
"go.opentelemetry.io/otel/internal/global"
)
const (
// DefaultTracesPath is a default URL path for endpoint that
// receives spans.
DefaultTracesPath string = "/v1/traces"
// DefaultTimeout is a default max waiting time for the backend to process
// each span batch.
DefaultTimeout time.Duration = 10 * time.Second
)
type (
// HTTPTransportProxyFunc is a function that resolves which URL to use as proxy for a given request.
// This type is compatible with `http.Transport.Proxy` and can be used to set a custom proxy function to the OTLP HTTP client.
HTTPTransportProxyFunc func(*http.Request) (*url.URL, error)
SignalConfig struct {
Endpoint string
Insecure bool
TLSCfg *tls.Config
Headers map[string]string
Compression Compression
Timeout time.Duration
URLPath string
// gRPC configurations
GRPCCredentials credentials.TransportCredentials
Proxy HTTPTransportProxyFunc
}
Config struct {
// Signal specific configurations
Traces SignalConfig
RetryConfig retry.Config
// gRPC configurations
ReconnectionPeriod time.Duration
ServiceConfig string
DialOptions []grpc.DialOption
GRPCConn *grpc.ClientConn
}
)
// NewHTTPConfig returns a new Config with all settings applied from opts and
// any unset setting using the default HTTP config values.
func NewHTTPConfig(opts ...HTTPOption) Config {
cfg := Config{
Traces: SignalConfig{
Endpoint: fmt.Sprintf("%s:%d", DefaultCollectorHost, DefaultCollectorHTTPPort),
URLPath: DefaultTracesPath,
Compression: NoCompression,
Timeout: DefaultTimeout,
},
RetryConfig: retry.DefaultConfig,
}
cfg = ApplyHTTPEnvConfigs(cfg)
for _, opt := range opts {
cfg = opt.ApplyHTTPOption(cfg)
}
cfg.Traces.URLPath = cleanPath(cfg.Traces.URLPath, DefaultTracesPath)
return cfg
}
// cleanPath returns a path with all spaces trimmed and all redundancies
// removed. If urlPath is empty or cleaning it results in an empty string,
// defaultPath is returned instead.
func cleanPath(urlPath string, defaultPath string) string {
tmp := path.Clean(strings.TrimSpace(urlPath))
if tmp == "." {
return defaultPath
}
if !path.IsAbs(tmp) {
tmp = fmt.Sprintf("/%s", tmp)
}
return tmp
}
// NewGRPCConfig returns a new Config with all settings applied from opts and
// any unset setting using the default gRPC config values.
func NewGRPCConfig(opts ...GRPCOption) Config {
userAgent := "OTel OTLP Exporter Go/" + otlptrace.Version()
cfg := Config{
Traces: SignalConfig{
Endpoint: fmt.Sprintf("%s:%d", DefaultCollectorHost, DefaultCollectorGRPCPort),
URLPath: DefaultTracesPath,
Compression: NoCompression,
Timeout: DefaultTimeout,
},
RetryConfig: retry.DefaultConfig,
DialOptions: []grpc.DialOption{grpc.WithUserAgent(userAgent)},
}
cfg = ApplyGRPCEnvConfigs(cfg)
for _, opt := range opts {
cfg = opt.ApplyGRPCOption(cfg)
}
if cfg.ServiceConfig != "" {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithDefaultServiceConfig(cfg.ServiceConfig))
}
// Prioritize GRPCCredentials over Insecure (passing both is an error).
if cfg.Traces.GRPCCredentials != nil {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(cfg.Traces.GRPCCredentials))
} else if cfg.Traces.Insecure {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(insecure.NewCredentials()))
} else {
// Default to using the host's root CA.
creds := credentials.NewTLS(nil)
cfg.Traces.GRPCCredentials = creds
cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(creds))
}
if cfg.Traces.Compression == GzipCompression {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name)))
}
if cfg.ReconnectionPeriod != 0 {
p := grpc.ConnectParams{
Backoff: backoff.DefaultConfig,
MinConnectTimeout: cfg.ReconnectionPeriod,
}
cfg.DialOptions = append(cfg.DialOptions, grpc.WithConnectParams(p))
}
return cfg
}
type (
// GenericOption applies an option to the HTTP or gRPC driver.
GenericOption interface {
ApplyHTTPOption(Config) Config
ApplyGRPCOption(Config) Config
// A private method to prevent users implementing the
// interface and so future additions to it will not
// violate compatibility.
private()
}
// HTTPOption applies an option to the HTTP driver.
HTTPOption interface {
ApplyHTTPOption(Config) Config
// A private method to prevent users implementing the
// interface and so future additions to it will not
// violate compatibility.
private()
}
// GRPCOption applies an option to the gRPC driver.
GRPCOption interface {
ApplyGRPCOption(Config) Config
// A private method to prevent users implementing the
// interface and so future additions to it will not
// violate compatibility.
private()
}
)
// genericOption is an option that applies the same logic
// for both gRPC and HTTP.
type genericOption struct {
fn func(Config) Config
}
func (g *genericOption) ApplyGRPCOption(cfg Config) Config {
return g.fn(cfg)
}
func (g *genericOption) ApplyHTTPOption(cfg Config) Config {
return g.fn(cfg)
}
func (genericOption) private() {}
func newGenericOption(fn func(cfg Config) Config) GenericOption {
return &genericOption{fn: fn}
}
// splitOption is an option that applies different logic
// for gRPC and HTTP.
type splitOption struct {
httpFn func(Config) Config
grpcFn func(Config) Config
}
func (g *splitOption) ApplyGRPCOption(cfg Config) Config {
return g.grpcFn(cfg)
}
func (g *splitOption) ApplyHTTPOption(cfg Config) Config {
return g.httpFn(cfg)
}
func (splitOption) private() {}
func newSplitOption(httpFn func(cfg Config) Config, grpcFn func(cfg Config) Config) GenericOption {
return &splitOption{httpFn: httpFn, grpcFn: grpcFn}
}
// httpOption is an option that is only applied to the HTTP driver.
type httpOption struct {
fn func(Config) Config
}
func (h *httpOption) ApplyHTTPOption(cfg Config) Config {
return h.fn(cfg)
}
func (httpOption) private() {}
func NewHTTPOption(fn func(cfg Config) Config) HTTPOption {
return &httpOption{fn: fn}
}
// grpcOption is an option that is only applied to the gRPC driver.
type grpcOption struct {
fn func(Config) Config
}
func (h *grpcOption) ApplyGRPCOption(cfg Config) Config {
return h.fn(cfg)
}
func (grpcOption) private() {}
func NewGRPCOption(fn func(cfg Config) Config) GRPCOption {
return &grpcOption{fn: fn}
}
// Generic Options
// WithEndpoint configures the trace host and port only; endpoint should
// resemble "example.com" or "localhost:4317". To configure the scheme and path,
// use WithEndpointURL.
func WithEndpoint(endpoint string) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Endpoint = endpoint
return cfg
})
}
// WithEndpointURL configures the trace scheme, host, port, and path; the
// provided value should resemble "https://example.com:4318/v1/traces".
func WithEndpointURL(v string) GenericOption {
return newGenericOption(func(cfg Config) Config {
u, err := url.Parse(v)
if err != nil {
global.Error(err, "otlptrace: parse endpoint url", "url", v)
return cfg
}
cfg.Traces.Endpoint = u.Host
cfg.Traces.URLPath = u.Path
if u.Scheme != "https" {
cfg.Traces.Insecure = true
}
return cfg
})
}
func WithCompression(compression Compression) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Compression = compression
return cfg
})
}
func WithURLPath(urlPath string) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.URLPath = urlPath
return cfg
})
}
func WithRetry(rc retry.Config) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.RetryConfig = rc
return cfg
})
}
func WithTLSClientConfig(tlsCfg *tls.Config) GenericOption {
return newSplitOption(func(cfg Config) Config {
cfg.Traces.TLSCfg = tlsCfg.Clone()
return cfg
}, func(cfg Config) Config {
cfg.Traces.GRPCCredentials = credentials.NewTLS(tlsCfg)
return cfg
})
}
func WithInsecure() GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Insecure = true
return cfg
})
}
func WithSecure() GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Insecure = false
return cfg
})
}
func WithHeaders(headers map[string]string) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Headers = headers
return cfg
})
}
func WithTimeout(duration time.Duration) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Timeout = duration
return cfg
})
}
func WithProxy(pf HTTPTransportProxyFunc) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Proxy = pf
return cfg
})
}

View File

@ -0,0 +1,40 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlptrace/otlpconfig/optiontypes.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpconfig // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig"
const (
// DefaultCollectorGRPCPort is the default gRPC port of the collector.
DefaultCollectorGRPCPort uint16 = 4317
// DefaultCollectorHTTPPort is the default HTTP port of the collector.
DefaultCollectorHTTPPort uint16 = 4318
// DefaultCollectorHost is the host address the Exporter will attempt
// to connect to if no collector address is provided.
DefaultCollectorHost string = "localhost"
)
// Compression describes the compression used for payloads sent to the
// collector.
type Compression int
const (
// NoCompression tells the driver to send payloads without
// compression.
NoCompression Compression = iota
// GzipCompression tells the driver to send payloads after
// compressing them with gzip.
GzipCompression
)
// Marshaler describes the kind of message format sent to the collector.
type Marshaler int
const (
// MarshalProto tells the driver to send using the protobuf binary format.
MarshalProto Marshaler = iota
// MarshalJSON tells the driver to send using json format.
MarshalJSON
)

View File

@ -0,0 +1,26 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlptrace/otlpconfig/tls.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpconfig // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig"
import (
"crypto/tls"
"crypto/x509"
"errors"
)
// CreateTLSConfig creates a tls.Config from a raw certificate bytes
// to verify a server certificate.
func CreateTLSConfig(certBytes []byte) (*tls.Config, error) {
cp := x509.NewCertPool()
if ok := cp.AppendCertsFromPEM(certBytes); !ok {
return nil, errors.New("failed to append certificate to the cert pool")
}
return &tls.Config{
RootCAs: cp,
}, nil
}

View File

@ -0,0 +1,56 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/partialsuccess.go
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package internal // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal"
import "fmt"
// PartialSuccess represents the underlying error for all handling of
// OTLP partial success messages. Use `errors.Is(err,
// PartialSuccess{})` to test whether an error passed to the OTel
// error handler belongs to this category.
type PartialSuccess struct {
ErrorMessage string
RejectedItems int64
RejectedKind string
}
var _ error = PartialSuccess{}
// Error implements the error interface.
func (ps PartialSuccess) Error() string {
msg := ps.ErrorMessage
if msg == "" {
msg = "empty message"
}
return fmt.Sprintf("OTLP partial success: %s (%d %s rejected)", msg, ps.RejectedItems, ps.RejectedKind)
}
// Is supports the errors.Is() interface.
func (ps PartialSuccess) Is(err error) bool {
_, ok := err.(PartialSuccess)
return ok
}
// TracePartialSuccessError returns an error describing a partial success
// response for the trace signal.
func TracePartialSuccessError(itemsRejected int64, errorMessage string) error {
return PartialSuccess{
ErrorMessage: errorMessage,
RejectedItems: itemsRejected,
RejectedKind: "spans",
}
}
// MetricPartialSuccessError returns an error describing a partial success
// response for the metric signal.
func MetricPartialSuccessError(itemsRejected int64, errorMessage string) error {
return PartialSuccess{
ErrorMessage: errorMessage,
RejectedItems: itemsRejected,
RejectedKind: "metric data points",
}
}
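// exampleIsPartialSuccess is an illustrative sketch added by the editor; it is
// not part of the vendored source and assumes the standard library "errors"
// package is imported. It shows the category test described above: because
// PartialSuccess implements Is, any PartialSuccess value matches regardless of
// its fields.
func exampleIsPartialSuccess(err error) bool {
	return errors.Is(err, PartialSuccess{})
}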

View File

@ -0,0 +1,145 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/retry/retry.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package retry provides request retry functionality that can perform
// configurable exponential backoff for transient errors and honor any
// explicit throttle responses received.
package retry // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/retry"
import (
"context"
"fmt"
"time"
"github.com/cenkalti/backoff/v4"
)
// DefaultConfig are the recommended defaults to use.
var DefaultConfig = Config{
Enabled: true,
InitialInterval: 5 * time.Second,
MaxInterval: 30 * time.Second,
MaxElapsedTime: time.Minute,
}
// Config defines configuration for retrying batches in case of export failure
// using an exponential backoff.
type Config struct {
// Enabled indicates whether to retry sending batches in case of
// export failure.
Enabled bool
// InitialInterval is the time to wait after the first failure before
// retrying.
InitialInterval time.Duration
// MaxInterval is the upper bound on backoff interval. Once this value is
// reached the delay between consecutive retries will always be
// `MaxInterval`.
MaxInterval time.Duration
// MaxElapsedTime is the maximum amount of time (including retries) spent
// trying to send a request/batch. Once this value is reached, the data
// is discarded.
MaxElapsedTime time.Duration
}
// RequestFunc wraps a request with retry logic.
type RequestFunc func(context.Context, func(context.Context) error) error
// EvaluateFunc returns whether an error is retryable and whether an explicit
// throttle duration included in the error should be honored.
//
// The function must return true if the error argument is retry-able,
// otherwise it must return false for the first return parameter.
//
// The function must return a non-zero time.Duration if the error contains
// explicit throttle duration that should be honored, otherwise it must return
// a zero valued time.Duration.
type EvaluateFunc func(error) (bool, time.Duration)
// RequestFunc returns a RequestFunc using the evaluate function to determine
// if requests can be retried and based on the exponential backoff
// configuration of c.
func (c Config) RequestFunc(evaluate EvaluateFunc) RequestFunc {
if !c.Enabled {
return func(ctx context.Context, fn func(context.Context) error) error {
return fn(ctx)
}
}
return func(ctx context.Context, fn func(context.Context) error) error {
// Do not use NewExponentialBackOff since it calls Reset and the code here
// must call Reset after changing the InitialInterval (this saves an
// unnecessary call to Now).
b := &backoff.ExponentialBackOff{
InitialInterval: c.InitialInterval,
RandomizationFactor: backoff.DefaultRandomizationFactor,
Multiplier: backoff.DefaultMultiplier,
MaxInterval: c.MaxInterval,
MaxElapsedTime: c.MaxElapsedTime,
Stop: backoff.Stop,
Clock: backoff.SystemClock,
}
b.Reset()
for {
err := fn(ctx)
if err == nil {
return nil
}
retryable, throttle := evaluate(err)
if !retryable {
return err
}
bOff := b.NextBackOff()
if bOff == backoff.Stop {
return fmt.Errorf("max retry time elapsed: %w", err)
}
// Wait for the greater of the backoff or throttle delay.
var delay time.Duration
if bOff > throttle {
delay = bOff
} else {
elapsed := b.GetElapsedTime()
if b.MaxElapsedTime != 0 && elapsed+throttle > b.MaxElapsedTime {
return fmt.Errorf("max retry time would elapse: %w", err)
}
delay = throttle
}
if ctxErr := waitFunc(ctx, delay); ctxErr != nil {
return fmt.Errorf("%w: %w", ctxErr, err)
}
}
}
}
// Allow override for testing.
var waitFunc = wait
// wait takes the caller's context, and the amount of time to wait. It will
// return nil if the timer fires before or at the same time as the context's
// deadline. This indicates that the call can be retried.
func wait(ctx context.Context, delay time.Duration) error {
timer := time.NewTimer(delay)
defer timer.Stop()
select {
case <-ctx.Done():
// Handle the case where the timer and context deadline end
// simultaneously by prioritizing the timer expiration nil value
// response.
select {
case <-timer.C:
default:
return ctx.Err()
}
case <-timer.C:
}
return nil
}
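// exampleRequestFunc is an illustrative sketch added by the editor; it is not
// part of the vendored source. It shows how the exporter composes this
// package: an EvaluateFunc classifies errors and may supply an explicit
// throttle delay, and the RequestFunc built from a Config retries send with
// exponential backoff until it succeeds, a non-retryable error occurs, or
// MaxElapsedTime is exceeded.
func exampleRequestFunc(ctx context.Context, send func(context.Context) error) error {
	evaluate := func(err error) (bool, time.Duration) {
		// Treat every error as retryable with no explicit throttle delay.
		return true, 0
	}
	return DefaultConfig.RequestFunc(evaluate)(ctx, send)
}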

View File

@ -0,0 +1,210 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlptracegrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
import (
"fmt"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/retry"
)
// Option applies an option to the gRPC driver.
type Option interface {
applyGRPCOption(otlpconfig.Config) otlpconfig.Config
}
func asGRPCOptions(opts []Option) []otlpconfig.GRPCOption {
converted := make([]otlpconfig.GRPCOption, len(opts))
for i, o := range opts {
converted[i] = otlpconfig.NewGRPCOption(o.applyGRPCOption)
}
return converted
}
// RetryConfig defines configuration for retrying export of span batches that
// failed to be received by the target endpoint.
//
// This configuration does not define any network retry strategy. That is
// entirely handled by the gRPC ClientConn.
type RetryConfig retry.Config
type wrappedOption struct {
otlpconfig.GRPCOption
}
func (w wrappedOption) applyGRPCOption(cfg otlpconfig.Config) otlpconfig.Config {
return w.ApplyGRPCOption(cfg)
}
// WithInsecure disables client transport security for the exporter's gRPC
// connection just like grpc.WithInsecure()
// (https://pkg.go.dev/google.golang.org/grpc#WithInsecure) does. Note, by
// default, client security is required unless WithInsecure is used.
//
// This option has no effect if WithGRPCConn is used.
func WithInsecure() Option {
return wrappedOption{otlpconfig.WithInsecure()}
}
// WithEndpoint sets the target endpoint (host and port) the Exporter will
// connect to. The provided endpoint should resemble "example.com:4317" (no
// scheme or path).
//
// If the OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
// environment variable is set, and this option is not passed, that variable
// value will be used. If both environment variables are set,
// OTEL_EXPORTER_OTLP_TRACES_ENDPOINT will take precedence. If an environment
// variable is set, and this option is passed, this option will take precedence.
//
// If both this option and WithEndpointURL are used, the last used option will
// take precedence.
//
// By default, if an environment variable is not set, and this option is not
// passed, "localhost:4317" will be used.
//
// This option has no effect if WithGRPCConn is used.
func WithEndpoint(endpoint string) Option {
return wrappedOption{otlpconfig.WithEndpoint(endpoint)}
}
// WithEndpointURL sets the target endpoint URL (scheme, host, port, path)
// the Exporter will connect to. The provided endpoint URL should resemble
// "https://example.com:4318/v1/traces".
//
// If the OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
// environment variable is set, and this option is not passed, that variable
// value will be used. If both environment variables are set,
// OTEL_EXPORTER_OTLP_TRACES_ENDPOINT will take precedence. If an environment
// variable is set, and this option is passed, this option will take precedence.
//
// If both this option and WithEndpoint are used, the last used option will
// take precedence.
//
// If an invalid URL is provided, the default value will be kept.
//
// By default, if an environment variable is not set, and this option is not
// passed, "https://localhost:4317/v1/traces" will be used.
//
// This option has no effect if WithGRPCConn is used.
func WithEndpointURL(u string) Option {
return wrappedOption{otlpconfig.WithEndpointURL(u)}
}
// WithReconnectionPeriod sets the minimum amount of time between connection
// attempts to the target endpoint.
//
// This option has no effect if WithGRPCConn is used.
func WithReconnectionPeriod(rp time.Duration) Option {
return wrappedOption{otlpconfig.NewGRPCOption(func(cfg otlpconfig.Config) otlpconfig.Config {
cfg.ReconnectionPeriod = rp
return cfg
})}
}
func compressorToCompression(compressor string) otlpconfig.Compression {
if compressor == "gzip" {
return otlpconfig.GzipCompression
}
otel.Handle(fmt.Errorf("invalid compression type: '%s', using no compression as default", compressor))
return otlpconfig.NoCompression
}
// WithCompressor sets the compressor for the gRPC client to use when sending
// requests. Supported compressor values: "gzip".
func WithCompressor(compressor string) Option {
return wrappedOption{otlpconfig.WithCompression(compressorToCompression(compressor))}
}
// WithHeaders will send the provided headers with each gRPC request.
func WithHeaders(headers map[string]string) Option {
return wrappedOption{otlpconfig.WithHeaders(headers)}
}
// WithTLSCredentials allows the connection to use TLS credentials when
// talking to the server. It takes grpc.TransportCredentials instead of, say,
// a certificate file or a tls.Certificate, because these credentials can be
// retrieved in many ways (e.g. from a plain file, an in-code tls.Config, or
// certificate rotation), so it is up to the caller to decide what to use.
//
// This option has no effect if WithGRPCConn is used.
func WithTLSCredentials(creds credentials.TransportCredentials) Option {
return wrappedOption{otlpconfig.NewGRPCOption(func(cfg otlpconfig.Config) otlpconfig.Config {
cfg.Traces.GRPCCredentials = creds
return cfg
})}
}
// WithServiceConfig defines the default gRPC service config used.
//
// This option has no effect if WithGRPCConn is used.
func WithServiceConfig(serviceConfig string) Option {
return wrappedOption{otlpconfig.NewGRPCOption(func(cfg otlpconfig.Config) otlpconfig.Config {
cfg.ServiceConfig = serviceConfig
return cfg
})}
}
// WithDialOption sets explicit grpc.DialOptions to use when making a
// connection. The options here are appended to the internal grpc.DialOptions
// used so they will take precedence over any other internal grpc.DialOptions
// they might conflict with.
// The [grpc.WithBlock], [grpc.WithTimeout], and [grpc.WithReturnConnectionError]
// grpc.DialOptions are ignored.
//
// This option has no effect if WithGRPCConn is used.
func WithDialOption(opts ...grpc.DialOption) Option {
return wrappedOption{otlpconfig.NewGRPCOption(func(cfg otlpconfig.Config) otlpconfig.Config {
cfg.DialOptions = opts
return cfg
})}
}
// WithGRPCConn sets conn as the gRPC ClientConn used for all communication.
//
// This option takes precedence over any other option that relates to
// establishing or persisting a gRPC connection to a target endpoint. Any
// other option of those types passed will be ignored.
//
// It is the caller's responsibility to close the passed conn. The client
// Shutdown method will not close this connection.
func WithGRPCConn(conn *grpc.ClientConn) Option {
return wrappedOption{otlpconfig.NewGRPCOption(func(cfg otlpconfig.Config) otlpconfig.Config {
cfg.GRPCConn = conn
return cfg
})}
}
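// exampleWithGRPCConn is an illustrative sketch added by the editor; it is not
// part of the vendored source and assumes the standard library "context"
// package is imported. The caller dialed conn elsewhere; per the documentation
// above, shutting the exporter down leaves conn open for the caller to close.
func exampleWithGRPCConn(ctx context.Context, conn *grpc.ClientConn) error {
	exp, err := New(ctx, WithGRPCConn(conn))
	if err != nil {
		return err
	}
	// Shutdown stops the exporter but does not close the caller-owned conn.
	return exp.Shutdown(ctx)
}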
// WithTimeout sets the max amount of time a client will attempt to export a
// batch of spans. This takes precedence over any retry settings defined with
// WithRetry; once this time limit has been reached the export is abandoned
// and the batch of spans is dropped.
//
// If unset, the default timeout will be set to 10 seconds.
func WithTimeout(duration time.Duration) Option {
return wrappedOption{otlpconfig.WithTimeout(duration)}
}
// WithRetry sets the retry policy for transient retryable errors that may be
// returned by the target endpoint when exporting a batch of spans.
//
// If the target endpoint responds with not only a retryable error, but also
// an explicit backoff time in the response, that time will take
// precedence over these settings.
//
// These settings do not define any network retry strategy. That is entirely
// handled by the gRPC ClientConn.
//
// If unset, the default retry policy will be used. It will retry the export
// 5 seconds after receiving a retryable error and increase exponentially
// after each error for no more than a total time of 1 minute.
func WithRetry(settings RetryConfig) Option {
return wrappedOption{otlpconfig.WithRetry(retry.Config(settings))}
}

View File

@ -0,0 +1,9 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
// Version is the current release version of the OpenTelemetry OTLP trace exporter in use.
func Version() string {
return "1.28.0"
}

vendor/go.opentelemetry.io/otel/get_main_pkgs.sh
View File

@ -0,0 +1,30 @@
#!/usr/bin/env bash
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
set -euo pipefail
top_dir='.'
if [[ $# -gt 0 ]]; then
top_dir="${1}"
fi
p=$(pwd)
mod_dirs=()
# Note `mapfile` does not exist in older bash versions:
# https://stackoverflow.com/questions/41475261/need-alternative-to-readarray-mapfile-for-script-on-older-version-of-bash
while IFS= read -r line; do
mod_dirs+=("$line")
done < <(find "${top_dir}" -type f -name 'go.mod' -exec dirname {} \; | sort)
for mod_dir in "${mod_dirs[@]}"; do
cd "${mod_dir}"
while IFS= read -r line; do
echo ".${line#${p}}"
done < <(go list --find -f '{{.Name}}|{{.Dir}}' ./... | grep '^main|' | cut -f 2- -d '|')
cd "${p}"
done

vendor/go.opentelemetry.io/otel/handler.go
View File

@ -0,0 +1,33 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otel // import "go.opentelemetry.io/otel"
import (
"go.opentelemetry.io/otel/internal/global"
)
// Compile-time check global.ErrDelegator implements ErrorHandler.
var _ ErrorHandler = (*global.ErrDelegator)(nil)
// GetErrorHandler returns the global ErrorHandler instance.
//
// The default ErrorHandler instance returned will log all errors to STDERR
// until an override ErrorHandler is set with SetErrorHandler. All
// ErrorHandlers returned prior to this will automatically forward errors to
// the set instance instead of logging.
//
// Subsequent calls to SetErrorHandler after the first will not forward errors
// to the new ErrorHandler for prior returned instances.
func GetErrorHandler() ErrorHandler { return global.GetErrorHandler() }
// SetErrorHandler sets the global ErrorHandler to h.
//
// The first time this is called, all ErrorHandlers previously returned from
// GetErrorHandler will send errors to h instead of the default logging
// ErrorHandler. Subsequent calls will set the global ErrorHandler, but not
// delegate errors to h.
func SetErrorHandler(h ErrorHandler) { global.SetErrorHandler(h) }
// Handle is a convenience function for GetErrorHandler().Handle(err).
func Handle(err error) { global.GetErrorHandler().Handle(err) }
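// printErrorHandler and exampleSetErrorHandler are an illustrative sketch
// added by the editor; they are not part of the vendored source and assume the
// standard library "log" package is imported. Any type with a Handle(error)
// method satisfies ErrorHandler.
type printErrorHandler struct{}

func (printErrorHandler) Handle(err error) { log.Printf("otel error: %v", err) }

func exampleSetErrorHandler() {
	// After this first call, handlers previously returned by GetErrorHandler
	// delegate to printErrorHandler, as documented above.
	SetErrorHandler(printErrorHandler{})
}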

View File

@ -0,0 +1,100 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package attribute provides several helper functions for commonly used
attribute processing logic.
*/
package attribute // import "go.opentelemetry.io/otel/internal/attribute"
import (
"reflect"
)
// BoolSliceValue converts a bool slice into an array with the same elements as the slice.
func BoolSliceValue(v []bool) interface{} {
var zero bool
cp := reflect.New(reflect.ArrayOf(len(v), reflect.TypeOf(zero))).Elem()
reflect.Copy(cp, reflect.ValueOf(v))
return cp.Interface()
}
// Int64SliceValue converts an int64 slice into an array with the same elements as the slice.
func Int64SliceValue(v []int64) interface{} {
var zero int64
cp := reflect.New(reflect.ArrayOf(len(v), reflect.TypeOf(zero))).Elem()
reflect.Copy(cp, reflect.ValueOf(v))
return cp.Interface()
}
// Float64SliceValue converts a float64 slice into an array with the same elements as the slice.
func Float64SliceValue(v []float64) interface{} {
var zero float64
cp := reflect.New(reflect.ArrayOf(len(v), reflect.TypeOf(zero))).Elem()
reflect.Copy(cp, reflect.ValueOf(v))
return cp.Interface()
}
// StringSliceValue converts a string slice into an array with the same elements as the slice.
func StringSliceValue(v []string) interface{} {
var zero string
cp := reflect.New(reflect.ArrayOf(len(v), reflect.TypeOf(zero))).Elem()
reflect.Copy(cp, reflect.ValueOf(v))
return cp.Interface()
}
// AsBoolSlice converts a bool array into a slice with the same elements as the array.
func AsBoolSlice(v interface{}) []bool {
rv := reflect.ValueOf(v)
if rv.Type().Kind() != reflect.Array {
return nil
}
var zero bool
correctLen := rv.Len()
correctType := reflect.ArrayOf(correctLen, reflect.TypeOf(zero))
cpy := reflect.New(correctType)
_ = reflect.Copy(cpy.Elem(), rv)
return cpy.Elem().Slice(0, correctLen).Interface().([]bool)
}
// AsInt64Slice converts an int64 array into a slice with the same elements as the array.
func AsInt64Slice(v interface{}) []int64 {
rv := reflect.ValueOf(v)
if rv.Type().Kind() != reflect.Array {
return nil
}
var zero int64
correctLen := rv.Len()
correctType := reflect.ArrayOf(correctLen, reflect.TypeOf(zero))
cpy := reflect.New(correctType)
_ = reflect.Copy(cpy.Elem(), rv)
return cpy.Elem().Slice(0, correctLen).Interface().([]int64)
}
// AsFloat64Slice converts a float64 array into a slice with the same elements as the array.
func AsFloat64Slice(v interface{}) []float64 {
rv := reflect.ValueOf(v)
if rv.Type().Kind() != reflect.Array {
return nil
}
var zero float64
correctLen := rv.Len()
correctType := reflect.ArrayOf(correctLen, reflect.TypeOf(zero))
cpy := reflect.New(correctType)
_ = reflect.Copy(cpy.Elem(), rv)
return cpy.Elem().Slice(0, correctLen).Interface().([]float64)
}
// AsStringSlice converts a string array into a slice with the same elements as the array.
func AsStringSlice(v interface{}) []string {
rv := reflect.ValueOf(v)
if rv.Type().Kind() != reflect.Array {
return nil
}
var zero string
correctLen := rv.Len()
correctType := reflect.ArrayOf(correctLen, reflect.TypeOf(zero))
cpy := reflect.New(correctType)
_ = reflect.Copy(cpy.Elem(), rv)
return cpy.Elem().Slice(0, correctLen).Interface().([]string)
}
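// exampleSliceRoundTrip is an illustrative sketch added by the editor; it is
// not part of the vendored source. A slice is converted into a fixed-size
// array value (which, unlike a slice, is comparable) and then recovered
// unchanged.
func exampleSliceRoundTrip() []bool {
	arr := BoolSliceValue([]bool{true, false}) // stored as a comparable [2]bool
	return AsBoolSlice(arr)                    // []bool{true, false}
}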

View File

@ -0,0 +1,32 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package baggage provides base types and functionality to store and retrieve
baggage in Go context. This package exists because the OpenTracing bridge to
OpenTelemetry needs to synchronize state whenever baggage for a context is
modified and that context contains an OpenTracing span. If it were not for
this need this package would not need to exist and the
`go.opentelemetry.io/otel/baggage` package would be the singular place where
W3C baggage is handled.
*/
package baggage // import "go.opentelemetry.io/otel/internal/baggage"
// List is the collection of baggage members. The W3C allows for duplicates,
// but OpenTelemetry does not; therefore, this is represented as a map.
type List map[string]Item
// Item is the value and metadata properties part of a list-member.
type Item struct {
Value string
Properties []Property
}
// Property is a metadata entry for a list-member.
type Property struct {
Key, Value string
// HasValue indicates whether a zero Value means the property has no value
// or whether the property's value is actually the zero value.
HasValue bool
}

View File

@ -0,0 +1,81 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package baggage // import "go.opentelemetry.io/otel/internal/baggage"
import "context"
type baggageContextKeyType int
const baggageKey baggageContextKeyType = iota
// SetHookFunc is a callback called when storing baggage in the context.
type SetHookFunc func(context.Context, List) context.Context
// GetHookFunc is a callback called when getting baggage from the context.
type GetHookFunc func(context.Context, List) List
type baggageState struct {
list List
setHook SetHookFunc
getHook GetHookFunc
}
// ContextWithSetHook returns a copy of parent with hook configured to be
// invoked every time ContextWithBaggage is called.
//
// Passing nil SetHookFunc creates a context with no set hook to call.
func ContextWithSetHook(parent context.Context, hook SetHookFunc) context.Context {
var s baggageState
if v, ok := parent.Value(baggageKey).(baggageState); ok {
s = v
}
s.setHook = hook
return context.WithValue(parent, baggageKey, s)
}
// ContextWithGetHook returns a copy of parent with hook configured to be
// invoked every time FromContext is called.
//
// Passing nil GetHookFunc creates a context with no get hook to call.
func ContextWithGetHook(parent context.Context, hook GetHookFunc) context.Context {
var s baggageState
if v, ok := parent.Value(baggageKey).(baggageState); ok {
s = v
}
s.getHook = hook
return context.WithValue(parent, baggageKey, s)
}
// ContextWithList returns a copy of parent with baggage. Passing nil list
// returns a context without any baggage.
func ContextWithList(parent context.Context, list List) context.Context {
var s baggageState
if v, ok := parent.Value(baggageKey).(baggageState); ok {
s = v
}
s.list = list
ctx := context.WithValue(parent, baggageKey, s)
if s.setHook != nil {
ctx = s.setHook(ctx, list)
}
return ctx
}
// ListFromContext returns the baggage contained in ctx.
func ListFromContext(ctx context.Context) List {
switch v := ctx.Value(baggageKey).(type) {
case baggageState:
if v.getHook != nil {
return v.getHook(ctx, v.list)
}
return v.list
default:
return nil
}
}

vendor/go.opentelemetry.io/otel/internal/gen.go
View File

@ -0,0 +1,18 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package internal // import "go.opentelemetry.io/otel/internal"
//go:generate gotmpl --body=./shared/matchers/expectation.go.tmpl "--data={}" --out=matchers/expectation.go
//go:generate gotmpl --body=./shared/matchers/expecter.go.tmpl "--data={}" --out=matchers/expecter.go
//go:generate gotmpl --body=./shared/matchers/temporal_matcher.go.tmpl "--data={}" --out=matchers/temporal_matcher.go
//go:generate gotmpl --body=./shared/internaltest/alignment.go.tmpl "--data={}" --out=internaltest/alignment.go
//go:generate gotmpl --body=./shared/internaltest/env.go.tmpl "--data={}" --out=internaltest/env.go
//go:generate gotmpl --body=./shared/internaltest/env_test.go.tmpl "--data={}" --out=internaltest/env_test.go
//go:generate gotmpl --body=./shared/internaltest/errors.go.tmpl "--data={}" --out=internaltest/errors.go
//go:generate gotmpl --body=./shared/internaltest/harness.go.tmpl "--data={\"matchersImportPath\": \"go.opentelemetry.io/otel/internal/matchers\"}" --out=internaltest/harness.go
//go:generate gotmpl --body=./shared/internaltest/text_map_carrier.go.tmpl "--data={}" --out=internaltest/text_map_carrier.go
//go:generate gotmpl --body=./shared/internaltest/text_map_carrier_test.go.tmpl "--data={}" --out=internaltest/text_map_carrier_test.go
//go:generate gotmpl --body=./shared/internaltest/text_map_propagator.go.tmpl "--data={}" --out=internaltest/text_map_propagator.go
//go:generate gotmpl --body=./shared/internaltest/text_map_propagator_test.go.tmpl "--data={}" --out=internaltest/text_map_propagator_test.go

View File

@ -0,0 +1,36 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package global // import "go.opentelemetry.io/otel/internal/global"
import (
"log"
"sync/atomic"
)
// ErrorHandler handles irremediable events.
type ErrorHandler interface {
// Handle handles any error deemed irremediable by an OpenTelemetry
// component.
Handle(error)
}
type ErrDelegator struct {
delegate atomic.Pointer[ErrorHandler]
}
// Compile-time check that delegator implements ErrorHandler.
var _ ErrorHandler = (*ErrDelegator)(nil)
func (d *ErrDelegator) Handle(err error) {
if eh := d.delegate.Load(); eh != nil {
(*eh).Handle(err)
return
}
log.Print(err)
}
// setDelegate sets the ErrorHandler delegate.
func (d *ErrDelegator) setDelegate(eh ErrorHandler) {
d.delegate.Store(&eh)
}

Some files were not shown because too many files have changed in this diff.