chore: vendor

2024-08-04 11:06:58 +02:00
parent 2a5985e44e
commit 04aec8232f
3557 changed files with 981078 additions and 1 deletion

201
vendor/go.opentelemetry.io/otel/sdk/LICENSE generated vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

3
vendor/go.opentelemetry.io/otel/sdk/README.md generated vendored Normal file

@@ -0,0 +1,3 @@
# SDK
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/sdk)](https://pkg.go.dev/go.opentelemetry.io/otel/sdk)


@@ -0,0 +1,3 @@
# SDK Instrumentation
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/sdk/instrumentation)](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/instrumentation)


@@ -0,0 +1,13 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package instrumentation provides types to represent the code libraries that
// provide OpenTelemetry instrumentation. These types are used in the
// OpenTelemetry signal pipelines to identify the source of telemetry.
//
// See
// https://github.com/open-telemetry/oteps/blob/d226b677d73a785523fe9b9701be13225ebc528d/text/0083-component.md
// and
// https://github.com/open-telemetry/oteps/blob/d226b677d73a785523fe9b9701be13225ebc528d/text/0201-scope-attributes.md
// for more information.
package instrumentation // import "go.opentelemetry.io/otel/sdk/instrumentation"


@@ -0,0 +1,8 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package instrumentation // import "go.opentelemetry.io/otel/sdk/instrumentation"
// Library represents the instrumentation library.
// Deprecated: please use Scope instead.
type Library = Scope


@@ -0,0 +1,15 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package instrumentation // import "go.opentelemetry.io/otel/sdk/instrumentation"
// Scope represents the instrumentation scope.
type Scope struct {
// Name is the name of the instrumentation scope. This should be the
// Go package name of that scope.
Name string
// Version is the version of the instrumentation scope.
Version string
// SchemaURL of the telemetry emitted by the scope.
SchemaURL string
}
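As a point of reference, a minimal sketch of constructing a Scope value from outside the SDK; the name, version, and schema URL shown are hypothetical:

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/sdk/instrumentation"
)

func main() {
	// Hypothetical scope; Name conventionally holds the instrumented
	// package's import path.
	scope := instrumentation.Scope{
		Name:      "example.com/mylib",
		Version:   "v0.1.0",
		SchemaURL: "https://opentelemetry.io/schemas/1.24.0",
	}
	fmt.Printf("%+v\n", scope)
}
```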

166
vendor/go.opentelemetry.io/otel/sdk/internal/env/env.go generated vendored Normal file

@@ -0,0 +1,166 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package env // import "go.opentelemetry.io/otel/sdk/internal/env"
import (
"os"
"strconv"
"go.opentelemetry.io/otel/internal/global"
)
// Environment variable names.
const (
// BatchSpanProcessorScheduleDelayKey is the delay interval between two
// consecutive exports (e.g. 5000).
BatchSpanProcessorScheduleDelayKey = "OTEL_BSP_SCHEDULE_DELAY"
// BatchSpanProcessorExportTimeoutKey is the maximum allowed time to
// export data (e.g. 3000).
BatchSpanProcessorExportTimeoutKey = "OTEL_BSP_EXPORT_TIMEOUT"
// BatchSpanProcessorMaxQueueSizeKey is the maximum queue size (e.g. 2048).
BatchSpanProcessorMaxQueueSizeKey = "OTEL_BSP_MAX_QUEUE_SIZE"
// BatchSpanProcessorMaxExportBatchSizeKey is the maximum batch size (e.g.
// 512). Note: it must be less than or equal to
// BatchSpanProcessorMaxQueueSize.
BatchSpanProcessorMaxExportBatchSizeKey = "OTEL_BSP_MAX_EXPORT_BATCH_SIZE"
// AttributeValueLengthKey is the maximum allowed attribute value size.
AttributeValueLengthKey = "OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT"
// AttributeCountKey is the maximum allowed span attribute count.
AttributeCountKey = "OTEL_ATTRIBUTE_COUNT_LIMIT"
// SpanAttributeValueLengthKey is the maximum allowed attribute value size
// for a span.
SpanAttributeValueLengthKey = "OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT"
// SpanAttributeCountKey is the maximum allowed span attribute count for a
// span.
SpanAttributeCountKey = "OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT"
// SpanEventCountKey is the maximum allowed span event count.
SpanEventCountKey = "OTEL_SPAN_EVENT_COUNT_LIMIT"
// SpanEventAttributeCountKey is the maximum allowed attribute per span
// event count.
SpanEventAttributeCountKey = "OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT"
// SpanLinkCountKey is the maximum allowed span link count.
SpanLinkCountKey = "OTEL_SPAN_LINK_COUNT_LIMIT"
// SpanLinkAttributeCountKey is the maximum allowed attribute per span
// link count.
SpanLinkAttributeCountKey = "OTEL_LINK_ATTRIBUTE_COUNT_LIMIT"
)
// firstInt returns the value of the first matching environment variable from
// keys. If the value is not an integer or no match is found, defaultValue is
// returned.
func firstInt(defaultValue int, keys ...string) int {
for _, key := range keys {
value := os.Getenv(key)
if value == "" {
continue
}
intValue, err := strconv.Atoi(value)
if err != nil {
global.Info("Got invalid value, number value expected.", key, value)
return defaultValue
}
return intValue
}
return defaultValue
}
// IntEnvOr returns the int value of the environment variable with name key if
// it exists, is not empty, and the value is a valid int. Otherwise, defaultValue is returned.
func IntEnvOr(key string, defaultValue int) int {
value := os.Getenv(key)
if value == "" {
return defaultValue
}
intValue, err := strconv.Atoi(value)
if err != nil {
global.Info("Got invalid value, number value expected.", key, value)
return defaultValue
}
return intValue
}
// BatchSpanProcessorScheduleDelay returns the environment variable value for
// the OTEL_BSP_SCHEDULE_DELAY key if it exists, otherwise defaultValue is
// returned.
func BatchSpanProcessorScheduleDelay(defaultValue int) int {
return IntEnvOr(BatchSpanProcessorScheduleDelayKey, defaultValue)
}
// BatchSpanProcessorExportTimeout returns the environment variable value for
// the OTEL_BSP_EXPORT_TIMEOUT key if it exists, otherwise defaultValue is
// returned.
func BatchSpanProcessorExportTimeout(defaultValue int) int {
return IntEnvOr(BatchSpanProcessorExportTimeoutKey, defaultValue)
}
// BatchSpanProcessorMaxQueueSize returns the environment variable value for
// the OTEL_BSP_MAX_QUEUE_SIZE key if it exists, otherwise defaultValue is
// returned.
func BatchSpanProcessorMaxQueueSize(defaultValue int) int {
return IntEnvOr(BatchSpanProcessorMaxQueueSizeKey, defaultValue)
}
// BatchSpanProcessorMaxExportBatchSize returns the environment variable value for
// the OTEL_BSP_MAX_EXPORT_BATCH_SIZE key if it exists, otherwise defaultValue
// is returned.
func BatchSpanProcessorMaxExportBatchSize(defaultValue int) int {
return IntEnvOr(BatchSpanProcessorMaxExportBatchSizeKey, defaultValue)
}
// SpanAttributeValueLength returns the environment variable value for the
// OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT key if it exists. Otherwise, the
// environment variable value for OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT is
// returned or defaultValue if that is not set.
func SpanAttributeValueLength(defaultValue int) int {
return firstInt(defaultValue, SpanAttributeValueLengthKey, AttributeValueLengthKey)
}
// SpanAttributeCount returns the environment variable value for the
// OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT key if it exists. Otherwise, the
// environment variable value for OTEL_ATTRIBUTE_COUNT_LIMIT is returned or
// defaultValue if that is not set.
func SpanAttributeCount(defaultValue int) int {
return firstInt(defaultValue, SpanAttributeCountKey, AttributeCountKey)
}
// SpanEventCount returns the environment variable value for the
// OTEL_SPAN_EVENT_COUNT_LIMIT key if it exists, otherwise defaultValue is
// returned.
func SpanEventCount(defaultValue int) int {
return IntEnvOr(SpanEventCountKey, defaultValue)
}
// SpanEventAttributeCount returns the environment variable value for the
// OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT key if it exists, otherwise defaultValue
// is returned.
func SpanEventAttributeCount(defaultValue int) int {
return IntEnvOr(SpanEventAttributeCountKey, defaultValue)
}
// SpanLinkCount returns the environment variable value for the
// OTEL_SPAN_LINK_COUNT_LIMIT key if it exists, otherwise defaultValue is
// returned.
func SpanLinkCount(defaultValue int) int {
return IntEnvOr(SpanLinkCountKey, defaultValue)
}
// SpanLinkAttributeCount returns the environment variable value for the
// OTEL_LINK_ATTRIBUTE_COUNT_LIMIT key if it exists, otherwise defaultValue is
// returned.
func SpanLinkAttributeCount(defaultValue int) int {
return IntEnvOr(SpanLinkAttributeCountKey, defaultValue)
}
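The span-limit getters above share one precedence rule: the span-specific variable wins over the general one. Since this package is internal and cannot be imported directly, here is a standalone sketch of that lookup using the same variable names:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// firstIntOr mirrors firstInt above: return the first environment variable
// from keys that parses as an int, else defaultValue.
func firstIntOr(defaultValue int, keys ...string) int {
	for _, key := range keys {
		v := os.Getenv(key)
		if v == "" {
			continue
		}
		n, err := strconv.Atoi(v)
		if err != nil {
			return defaultValue
		}
		return n
	}
	return defaultValue
}

func main() {
	os.Setenv("OTEL_ATTRIBUTE_COUNT_LIMIT", "64")
	os.Setenv("OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT", "32")
	// The span-specific limit takes precedence: prints 32.
	fmt.Println(firstIntOr(128, "OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT", "OTEL_ATTRIBUTE_COUNT_LIMIT"))
}
```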


@@ -0,0 +1,46 @@
# Experimental Features
The SDK contains features that have not yet stabilized in the OpenTelemetry specification.
These features are added to the OpenTelemetry Go SDK prior to stabilization in the specification so that users can start experimenting with them and provide feedback.
These features may change in backwards-incompatible ways as feedback is applied.
See the [Compatibility and Stability](#compatibility-and-stability) section for more information.
## Features
- [Resource](#resource)
### Resource
[OpenTelemetry resource semantic conventions] include many attribute definitions that are defined as experimental.
To have experimental semantic conventions added by [resource detectors], set the `OTEL_GO_X_RESOURCE` environment variable.
The value must be the case-insensitive string `"true"` to enable the feature.
All other values are ignored.
<!-- TODO: document what attributes are added by which detector -->
[OpenTelemetry resource semantic conventions]: https://opentelemetry.io/docs/specs/semconv/resource/
[resource detectors]: https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#Detector
#### Examples
Enable experimental resource semantic conventions.
```console
export OTEL_GO_X_RESOURCE=true
```
Disable experimental resource semantic conventions.
```console
unset OTEL_GO_X_RESOURCE
```
## Compatibility and Stability
Experimental features do not fall within the scope of the OpenTelemetry Go versioning and stability [policy](../../../VERSIONING.md).
These features may be removed or modified in successive version releases, including patch versions.
When an experimental feature is promoted to a stable feature, a migration path will be included in the changelog entry of the release.
There is no guarantee that any environment variable feature flags that enabled the experimental feature will be supported by the stable version.
If they are supported, they may be accompanied with a deprecation notice stating a timeline for the removal of that support.

66
vendor/go.opentelemetry.io/otel/sdk/internal/x/x.go generated vendored Normal file

@@ -0,0 +1,66 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package x contains support for OTel SDK experimental features.
//
// This package should only be used for features defined in the specification.
// It should not be used for experiments or new project ideas.
package x // import "go.opentelemetry.io/otel/sdk/internal/x"
import (
"os"
"strings"
)
// Resource is an experimental feature flag that defines whether resource
// detectors should include experimental semantic conventions.
//
// To enable this feature set the OTEL_GO_X_RESOURCE environment variable
// to the case-insensitive string value of "true" (i.e. "True" and "TRUE"
// will also enable this).
var Resource = newFeature("RESOURCE", func(v string) (string, bool) {
if strings.ToLower(v) == "true" {
return v, true
}
return "", false
})
// Feature is an experimental feature control flag. It provides a uniform way
// to interact with these feature flags and parse their values.
type Feature[T any] struct {
key string
parse func(v string) (T, bool)
}
func newFeature[T any](suffix string, parse func(string) (T, bool)) Feature[T] {
const envKeyRoot = "OTEL_GO_X_"
return Feature[T]{
key: envKeyRoot + suffix,
parse: parse,
}
}
// Key returns the environment variable key that needs to be set to enable the
// feature.
func (f Feature[T]) Key() string { return f.key }
// Lookup returns the user-configured value for the feature and true if the
// user has enabled the feature. Otherwise, if the feature is not enabled, a
// zero-value and false are returned.
func (f Feature[T]) Lookup() (v T, ok bool) {
// https://github.com/open-telemetry/opentelemetry-specification/blob/62effed618589a0bec416a87e559c0a9d96289bb/specification/configuration/sdk-environment-variables.md#parsing-empty-value
//
// > The SDK MUST interpret an empty value of an environment variable the
// > same way as when the variable is unset.
vRaw := os.Getenv(f.key)
if vRaw == "" {
return v, ok
}
return f.parse(vRaw)
}
// Enabled returns if the feature is enabled.
func (f Feature[T]) Enabled() bool {
_, ok := f.Lookup()
return ok
}
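Because package x is internal, user code toggles a feature only through its environment variable. A standalone sketch of the parse rule the Resource flag uses — only a case-insensitive "true" enables it, and anything else (including "1") is ignored:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// enabled mirrors the Resource flag's parse func above.
func enabled(key string) bool {
	return strings.ToLower(os.Getenv(key)) == "true"
}

func main() {
	os.Setenv("OTEL_GO_X_RESOURCE", "TRUE")
	fmt.Println(enabled("OTEL_GO_X_RESOURCE")) // true: matched case-insensitively

	os.Setenv("OTEL_GO_X_RESOURCE", "1")
	fmt.Println(enabled("OTEL_GO_X_RESOURCE")) // false: only "true" counts
}
```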

201
vendor/go.opentelemetry.io/otel/sdk/metric/LICENSE generated vendored Normal file

@@ -0,0 +1,201 @@
[Full Apache License, Version 2.0 text — identical to the vendor/go.opentelemetry.io/otel/sdk/LICENSE file above.]

3
vendor/go.opentelemetry.io/otel/sdk/metric/README.md generated vendored Normal file

@@ -0,0 +1,3 @@
# Metric SDK
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/sdk/metric)](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/metric)


@@ -0,0 +1,189 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"errors"
"fmt"
"slices"
)
// errAgg is wrapped by misconfigured aggregations.
var errAgg = errors.New("aggregation")
// Aggregation is the aggregation used to summarize recorded measurements.
type Aggregation interface {
// copy returns a deep copy of the Aggregation.
copy() Aggregation
// err returns an error for any misconfigured Aggregation.
err() error
}
// AggregationDrop is an Aggregation that drops all recorded data.
type AggregationDrop struct{} // AggregationDrop has no parameters.
var _ Aggregation = AggregationDrop{}
// copy returns a deep copy of d.
func (d AggregationDrop) copy() Aggregation { return d }
// err returns an error for any misconfiguration. A drop aggregation has no
// parameters and cannot be misconfigured, therefore this always returns nil.
func (AggregationDrop) err() error { return nil }
// AggregationDefault is an Aggregation that uses the default instrument kind selection
// mapping to select another Aggregation. A metric reader can be configured to
// make an aggregation selection based on instrument kind that differs from
// the default. This Aggregation ensures the default is used.
//
// See the [DefaultAggregationSelector] for information about the default
// instrument kind selection mapping.
type AggregationDefault struct{} // AggregationDefault has no parameters.
var _ Aggregation = AggregationDefault{}
// copy returns a deep copy of d.
func (d AggregationDefault) copy() Aggregation { return d }
// err returns an error for any misconfiguration. A default aggregation has no
// parameters and cannot be misconfigured, therefore this always returns nil.
func (AggregationDefault) err() error { return nil }
// AggregationSum is an Aggregation that summarizes a set of measurements as their
// arithmetic sum.
type AggregationSum struct{} // AggregationSum has no parameters.
var _ Aggregation = AggregationSum{}
// copy returns a deep copy of s.
func (s AggregationSum) copy() Aggregation { return s }
// err returns an error for any misconfiguration. A sum aggregation has no
// parameters and cannot be misconfigured, therefore this always returns nil.
func (AggregationSum) err() error { return nil }
// AggregationLastValue is an Aggregation that summarizes a set of measurements as the
// last one made.
type AggregationLastValue struct{} // AggregationLastValue has no parameters.
var _ Aggregation = AggregationLastValue{}
// copy returns a deep copy of l.
func (l AggregationLastValue) copy() Aggregation { return l }
// err returns an error for any misconfiguration. A last-value aggregation has
// no parameters and cannot be misconfigured, therefore this always returns
// nil.
func (AggregationLastValue) err() error { return nil }
// AggregationExplicitBucketHistogram is an Aggregation that summarizes a set of
// measurements as a histogram with explicitly defined buckets.
type AggregationExplicitBucketHistogram struct {
// Boundaries are the increasing bucket boundary values. Boundary values
// define bucket upper bounds. Buckets are exclusive of their lower
// boundary and inclusive of their upper bound (except at positive
// infinity). A measurement is defined to fall into the greatest-numbered
// bucket with a boundary that is greater than or equal to the
// measurement. As an example, boundaries defined as:
//
// []float64{0, 5, 10, 25, 50, 75, 100, 250, 500, 1000}
//
// Will define these buckets:
//
// (-∞, 0], (0, 5.0], (5.0, 10.0], (10.0, 25.0], (25.0, 50.0],
// (50.0, 75.0], (75.0, 100.0], (100.0, 250.0], (250.0, 500.0],
// (500.0, 1000.0], (1000.0, +∞)
Boundaries []float64
// NoMinMax indicates whether to skip recording the min and max of the
// distribution. By default, these extrema are recorded.
//
// Recording these extrema for cumulative data is expected to have little
// value: they will represent the entire life of the instrument instead of
// just the current collection cycle. It is recommended to set this to true
// for that type of data to avoid computing the low-value extrema.
NoMinMax bool
}
var _ Aggregation = AggregationExplicitBucketHistogram{}
// errHist is returned by misconfigured ExplicitBucketHistograms.
var errHist = fmt.Errorf("%w: explicit bucket histogram", errAgg)
// err returns an error for any misconfiguration.
func (h AggregationExplicitBucketHistogram) err() error {
if len(h.Boundaries) <= 1 {
return nil
}
// Check boundaries are monotonic.
i := h.Boundaries[0]
for _, j := range h.Boundaries[1:] {
if i >= j {
return fmt.Errorf("%w: non-monotonic boundaries: %v", errHist, h.Boundaries)
}
i = j
}
return nil
}
// copy returns a deep copy of h.
func (h AggregationExplicitBucketHistogram) copy() Aggregation {
return AggregationExplicitBucketHistogram{
Boundaries: slices.Clone(h.Boundaries),
NoMinMax: h.NoMinMax,
}
}
// AggregationBase2ExponentialHistogram is an Aggregation that summarizes a set of
// measurements as a histogram with bucket widths that grow exponentially.
type AggregationBase2ExponentialHistogram struct {
// MaxSize is the maximum number of buckets to use for the histogram.
MaxSize int32
// MaxScale is the maximum resolution scale to use for the histogram.
//
// MaxScale has a maximum value of 20. Using a value of 20 means the
// maximum number of buckets that can fit within the range of a
// signed 32-bit integer index could be used.
//
// MaxScale has a minimum value of -10. Using a value of -10 means only
// two buckets will be used.
MaxScale int32
// NoMinMax indicates whether to skip recording the min and max of the
// distribution. By default, these extrema are recorded.
//
// Recording these extrema for cumulative data is expected to have little
// value: they will represent the entire life of the instrument instead of
// just the current collection cycle. It is recommended to set this to true
// for that type of data to avoid computing the low-value extrema.
NoMinMax bool
}
var _ Aggregation = AggregationBase2ExponentialHistogram{}
// copy returns a deep copy of the Aggregation.
func (e AggregationBase2ExponentialHistogram) copy() Aggregation {
return e
}
const (
expoMaxScale = 20
expoMinScale = -10
)
// errExpoHist is returned by misconfigured Base2ExponentialBucketHistograms.
var errExpoHist = fmt.Errorf("%w: exponential histogram", errAgg)
// err returns an error for any misconfigured Aggregation.
func (e AggregationBase2ExponentialHistogram) err() error {
if e.MaxScale > expoMaxScale {
return fmt.Errorf("%w: max size %d is greater than maximum scale %d", errExpoHist, e.MaxSize, expoMaxScale)
}
if e.MaxSize <= 0 {
return fmt.Errorf("%w: max size %d is less than or equal to zero", errExpoHist, e.MaxSize)
}
return nil
}
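Aggregations are normally selected through a View registered on a MeterProvider. A minimal sketch wiring the explicit-bucket histogram above to a hypothetical instrument name; the boundaries must be strictly increasing or err rejects them:

```go
package main

import (
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	// "request.duration" is a hypothetical instrument name.
	view := sdkmetric.NewView(
		sdkmetric.Instrument{Name: "request.duration"},
		sdkmetric.Stream{
			Aggregation: sdkmetric.AggregationExplicitBucketHistogram{
				Boundaries: []float64{0, 5, 10, 25, 50, 100},
			},
		},
	)
	_ = sdkmetric.NewMeterProvider(sdkmetric.WithView(view))
}
```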

83
vendor/go.opentelemetry.io/otel/sdk/metric/cache.go generated vendored Normal file

@@ -0,0 +1,83 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"sync"
)
// cache is a locking storage used to quickly return already computed values.
//
// The zero value of a cache is empty and ready to use.
//
// A cache must not be copied after first use.
//
// All methods of a cache are safe to call concurrently.
type cache[K comparable, V any] struct {
sync.Mutex
data map[K]V
}
// Lookup returns the value stored in the cache with the associated key if it
// exists. Otherwise, f is called and its returned value is set in the cache
// for key and returned.
//
// Lookup is safe to call concurrently. It will hold the cache lock, so f
// should not block excessively.
func (c *cache[K, V]) Lookup(key K, f func() V) V {
c.Lock()
defer c.Unlock()
if c.data == nil {
val := f()
c.data = map[K]V{key: val}
return val
}
if v, ok := c.data[key]; ok {
return v
}
val := f()
c.data[key] = val
return val
}
// HasKey returns true if Lookup has previously been called with that key.
//
// HasKey is safe to call concurrently.
func (c *cache[K, V]) HasKey(key K) bool {
c.Lock()
defer c.Unlock()
_, ok := c.data[key]
return ok
}
// cacheWithErr is a locking storage used to quickly return already computed values and an error.
//
// The zero value of a cacheWithErr is empty and ready to use.
//
// A cacheWithErr must not be copied after first use.
//
// All methods of a cacheWithErr are safe to call concurrently.
type cacheWithErr[K comparable, V any] struct {
cache[K, valAndErr[V]]
}
type valAndErr[V any] struct {
val V
err error
}
// Lookup returns the value stored in the cacheWithErr with the associated key
// if it exists. Otherwise, f is called and its returned value is set in the
// cacheWithErr for key and returned.
//
// Lookup is safe to call concurrently. It will hold the cacheWithErr lock, so f
// should not block excessively.
func (c *cacheWithErr[K, V]) Lookup(key K, f func() (V, error)) (V, error) {
combined := c.cache.Lookup(key, func() valAndErr[V] {
val, err := f()
return valAndErr[V]{val: val, err: err}
})
return combined.val, combined.err
}
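The cache type itself is unexported, but the pattern is plain compute-once memoization behind a mutex. A standalone sketch, assuming nothing beyond the standard library:

```go
package main

import (
	"fmt"
	"sync"
)

// memo mirrors cache.Lookup: compute a value at most once per key,
// holding a lock for the duration of the computation.
type memo[K comparable, V any] struct {
	mu   sync.Mutex
	data map[K]V
}

func (m *memo[K, V]) lookup(key K, f func() V) V {
	m.mu.Lock()
	defer m.mu.Unlock()
	if v, ok := m.data[key]; ok {
		return v
	}
	if m.data == nil {
		m.data = make(map[K]V)
	}
	v := f()
	m.data[key] = v
	return v
}

func main() {
	var m memo[string, int]
	fmt.Println(m.lookup("a", func() int { return 1 })) // computes: 1
	fmt.Println(m.lookup("a", func() int { return 2 })) // cached: still 1
}
```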

137
vendor/go.opentelemetry.io/otel/sdk/metric/config.go generated vendored Normal file

@@ -0,0 +1,137 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"context"
"fmt"
"sync"
"go.opentelemetry.io/otel/sdk/resource"
)
// config contains configuration options for a MeterProvider.
type config struct {
res *resource.Resource
readers []Reader
views []View
}
// readerSignals returns force-flush and shutdown functions for a
// MeterProvider to call in its respective methods. The force-flush and
// shutdown methods of all Readers c contains are unified into the returned
// single functions.
func (c config) readerSignals() (forceFlush, shutdown func(context.Context) error) {
var fFuncs, sFuncs []func(context.Context) error
for _, r := range c.readers {
sFuncs = append(sFuncs, r.Shutdown)
if f, ok := r.(interface{ ForceFlush(context.Context) error }); ok {
fFuncs = append(fFuncs, f.ForceFlush)
}
}
return unify(fFuncs), unifyShutdown(sFuncs)
}
// unify unifies calling all of funcs into a single function call. All errors
// returned from calls to funcs will be unified into a single error return
// value.
func unify(funcs []func(context.Context) error) func(context.Context) error {
return func(ctx context.Context) error {
var errs []error
for _, f := range funcs {
if err := f(ctx); err != nil {
errs = append(errs, err)
}
}
return unifyErrors(errs)
}
}
// unifyErrors combines multiple errors into a single error.
func unifyErrors(errs []error) error {
switch len(errs) {
case 0:
return nil
case 1:
return errs[0]
default:
return fmt.Errorf("%v", errs)
}
}
// unifyShutdown unifies calling all of funcs once for a shutdown. If called
// more than once, an ErrReaderShutdown error is returned.
func unifyShutdown(funcs []func(context.Context) error) func(context.Context) error {
f := unify(funcs)
var once sync.Once
return func(ctx context.Context) error {
err := ErrReaderShutdown
once.Do(func() { err = f(ctx) })
return err
}
}
// newConfig returns a config configured with options.
func newConfig(options []Option) config {
conf := config{res: resource.Default()}
for _, o := range options {
conf = o.apply(conf)
}
return conf
}
// Option applies a configuration option value to a MeterProvider.
type Option interface {
apply(config) config
}
// optionFunc applies a set of options to a config.
type optionFunc func(config) config
// apply returns a config with option(s) applied.
func (o optionFunc) apply(conf config) config {
return o(conf)
}
// WithResource associates a Resource with a MeterProvider. This Resource
// represents the entity producing telemetry and is associated with all Meters
// the MeterProvider will create.
//
// By default, if this Option is not used, the default Resource from the
// go.opentelemetry.io/otel/sdk/resource package will be used.
func WithResource(res *resource.Resource) Option {
return optionFunc(func(conf config) config {
conf.res = res
return conf
})
}
// WithReader associates Reader r with a MeterProvider.
//
// By default, if this option is not used, the MeterProvider will perform no
// operations; no data will be exported without a Reader.
func WithReader(r Reader) Option {
return optionFunc(func(cfg config) config {
if r == nil {
return cfg
}
cfg.readers = append(cfg.readers, r)
return cfg
})
}
// WithView associates views with a MeterProvider.
//
// Views are appended to existing ones in a MeterProvider if this option is
// used multiple times.
//
// By default, if this option is not used, the MeterProvider will use the
// default view.
func WithView(views ...View) Option {
return optionFunc(func(cfg config) config {
cfg.views = append(cfg.views, views...)
return cfg
})
}
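Tying the options together, a sketch of a fully configured MeterProvider. The exporter is assumed to be any configured Exporter (an OTLP exporter, say), and the instrument and stream names are hypothetical:

```go
package example

import (
	"go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/resource"
)

// newProvider wires a resource, a push-based reader, and a renaming view
// into one MeterProvider.
func newProvider(exp metric.Exporter) *metric.MeterProvider {
	return metric.NewMeterProvider(
		metric.WithResource(resource.Default()),
		metric.WithReader(metric.NewPeriodicReader(exp)),
		metric.WithView(metric.NewView(
			metric.Instrument{Name: "latency"},
			metric.Stream{Name: "request.latency"},
		)),
	)
}
```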

39
vendor/go.opentelemetry.io/otel/sdk/metric/doc.go generated vendored Normal file

@@ -0,0 +1,39 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package metric provides an implementation of the OpenTelemetry metrics SDK.
//
// See https://opentelemetry.io/docs/concepts/signals/metrics/ for information
// about the concept of OpenTelemetry metrics and
// https://opentelemetry.io/docs/concepts/components/ for more information
// about OpenTelemetry SDKs.
//
// The entry point for the metric package is the MeterProvider. It is the
// object that all API calls use to create Meters, instruments, and ultimately
// make metric measurements. Also, it is an object that should be used to
// control the life-cycle (start, flush, and shutdown) of the SDK.
//
// A MeterProvider needs to be configured to export the measured data; this is
// done by configuring it with a Reader implementation (using the WithReader
// Option). Readers take two forms: ones that push to an endpoint
// (NewPeriodicReader), and ones that an endpoint pulls from. See
// [go.opentelemetry.io/otel/exporters] for exporters that can be used as
// or with these Readers.
//
// Each Reader, when registered with the MeterProvider, can be augmented with a
// View. Views allow users that run OpenTelemetry instrumented code to modify
// the generated data of that instrumentation.
//
// The data generated by a MeterProvider needs to include information about its
// origin. A MeterProvider needs to be configured with a Resource, using the
// WithResource Option, to include this information. This Resource
// should be used to describe the unique runtime environment instrumented code
// is being run on. That way when multiple instances of the code are collected
// at a single endpoint their origin is decipherable.
//
// See [go.opentelemetry.io/otel/metric] for more information about
// the metric API.
//
// See [go.opentelemetry.io/otel/sdk/metric/internal/x] for information about
// the experimental features.
package metric // import "go.opentelemetry.io/otel/sdk/metric"

39
vendor/go.opentelemetry.io/otel/sdk/metric/env.go generated vendored Normal file

@@ -0,0 +1,39 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"os"
"strconv"
"time"
"go.opentelemetry.io/otel/internal/global"
)
// Environment variable names.
const (
// The time interval (in milliseconds) between the start of two export attempts.
envInterval = "OTEL_METRIC_EXPORT_INTERVAL"
// Maximum allowed time (in milliseconds) to export data.
envTimeout = "OTEL_METRIC_EXPORT_TIMEOUT"
)
// envDuration returns an environment variable's value as a duration in
// milliseconds if it exists, or defaultValue if the environment variable is
// not defined or the value is not valid.
func envDuration(key string, defaultValue time.Duration) time.Duration {
v := os.Getenv(key)
if v == "" {
return defaultValue
}
d, err := strconv.Atoi(v)
if err != nil {
global.Error(err, "parse duration", "environment variable", key, "value", v)
return defaultValue
}
if d <= 0 {
global.Error(errNonPositiveDuration, "non-positive duration", "environment variable", key, "value", v)
return defaultValue
}
return time.Duration(d) * time.Millisecond
}
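A standalone restatement of the parsing rule above (envDuration itself is unexported): values are read as a millisecond count, and empty, invalid, or non-positive input falls back to the default:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// envMillis mirrors envDuration: parse a millisecond count, falling back
// to defaultValue on empty, invalid, or non-positive input.
func envMillis(key string, defaultValue time.Duration) time.Duration {
	v := os.Getenv(key)
	if v == "" {
		return defaultValue
	}
	d, err := strconv.Atoi(v)
	if err != nil || d <= 0 {
		return defaultValue
	}
	return time.Duration(d) * time.Millisecond
}

func main() {
	os.Setenv("OTEL_METRIC_EXPORT_INTERVAL", "10000")
	fmt.Println(envMillis("OTEL_METRIC_EXPORT_INTERVAL", time.Minute)) // 10s

	os.Setenv("OTEL_METRIC_EXPORT_INTERVAL", "-1")
	fmt.Println(envMillis("OTEL_METRIC_EXPORT_INTERVAL", time.Minute)) // fallback: 1m0s
}
```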

81
vendor/go.opentelemetry.io/otel/sdk/metric/exemplar.go generated vendored Normal file

@@ -0,0 +1,81 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"os"
"runtime"
"slices"
"go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
"go.opentelemetry.io/otel/sdk/metric/internal/x"
)
// reservoirFunc returns the appropriately configured exemplar reservoir
// creation func based on the passed InstrumentKind and user defined
// environment variables.
//
// Note: This will only return non-nil values when the experimental exemplar
// feature is enabled and the OTEL_METRICS_EXEMPLAR_FILTER environment variable
// is not set to always_off.
func reservoirFunc[N int64 | float64](agg Aggregation) func() exemplar.FilteredReservoir[N] {
if !x.Exemplars.Enabled() {
return nil
}
// https://github.com/open-telemetry/opentelemetry-specification/blob/d4b241f451674e8f611bb589477680341006ad2b/specification/configuration/sdk-environment-variables.md#exemplar
const filterEnvKey = "OTEL_METRICS_EXEMPLAR_FILTER"
var filter exemplar.Filter
switch os.Getenv(filterEnvKey) {
case "always_on":
filter = exemplar.AlwaysOnFilter
case "always_off":
return exemplar.Drop
case "trace_based":
fallthrough
default:
filter = exemplar.SampledFilter
}
// https://github.com/open-telemetry/opentelemetry-specification/blob/d4b241f451674e8f611bb589477680341006ad2b/specification/metrics/sdk.md#exemplar-defaults
// Explicit bucket histogram aggregation with more than 1 bucket will
// use AlignedHistogramBucketExemplarReservoir.
a, ok := agg.(AggregationExplicitBucketHistogram)
if ok && len(a.Boundaries) > 0 {
cp := slices.Clone(a.Boundaries)
return func() exemplar.FilteredReservoir[N] {
bounds := cp
return exemplar.NewFilteredReservoir[N](filter, exemplar.Histogram(bounds))
}
}
var n int
if a, ok := agg.(AggregationBase2ExponentialHistogram); ok {
// Base2 Exponential Histogram Aggregation SHOULD use a
// SimpleFixedSizeExemplarReservoir with a reservoir equal to the
// smaller of the maximum number of buckets configured on the
// aggregation or twenty (e.g. min(20, max_buckets)).
n = int(a.MaxSize)
if n > 20 {
n = 20
}
} else {
// https://github.com/open-telemetry/opentelemetry-specification/blob/e94af89e3d0c01de30127a0f423e912f6cda7bed/specification/metrics/sdk.md#simplefixedsizeexemplarreservoir
// This Exemplar reservoir MAY take a configuration parameter for
// the size of the reservoir. If no size configuration is
// provided, the default size MAY be the number of possible
// concurrent threads (e.g. number of CPUs) to help reduce
// contention. Otherwise, a default size of 1 SHOULD be used.
n = runtime.NumCPU()
if n < 1 {
// Should never be the case, but be defensive.
n = 1
}
}
return func() exemplar.FilteredReservoir[N] {
return exemplar.NewFilteredReservoir[N](filter, exemplar.FixedSize(n))
}
}
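Setting aside the explicit-bucket case (which gets its own boundary-aligned reservoir), the sizing rules above reduce to min(20, MaxSize) for exponential histograms and the CPU count otherwise. A sketch restating them with the exported aggregation types (the min and max built-ins require Go 1.21):

```go
package main

import (
	"fmt"
	"runtime"

	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

// reservoirSize restates the default exemplar reservoir sizing above.
func reservoirSize(agg sdkmetric.Aggregation) int {
	if a, ok := agg.(sdkmetric.AggregationBase2ExponentialHistogram); ok {
		return min(20, int(a.MaxSize))
	}
	return max(1, runtime.NumCPU())
}

func main() {
	fmt.Println(reservoirSize(sdkmetric.AggregationBase2ExponentialHistogram{MaxSize: 160})) // 20
	fmt.Println(reservoirSize(sdkmetric.AggregationSum{}))                                   // number of CPUs
}
```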

77
vendor/go.opentelemetry.io/otel/sdk/metric/exporter.go generated vendored Normal file

@@ -0,0 +1,77 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"context"
"fmt"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
)
// ErrExporterShutdown is returned if Export or Shutdown are called after an
// Exporter has been Shutdown.
var ErrExporterShutdown = fmt.Errorf("exporter is shutdown")
// Exporter handles the delivery of metric data to external receivers. This is
// the final component in the metric push pipeline.
type Exporter interface {
// Temporality returns the Temporality to use for an instrument kind.
//
// This method needs to be concurrent safe with itself and all the other
// Exporter methods.
Temporality(InstrumentKind) metricdata.Temporality
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Aggregation returns the Aggregation to use for an instrument kind.
//
// This method needs to be concurrent safe with itself and all the other
// Exporter methods.
Aggregation(InstrumentKind) Aggregation
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Export serializes and transmits metric data to a receiver.
//
// This is called synchronously, there is no concurrency safety
// requirement. Because of this, it is critical that all timeouts and
// cancellations of the passed context be honored.
//
// All retry logic must be contained in this function. The SDK does not
// implement any retry logic. All errors returned by this function are
// considered unrecoverable and will be reported to a configured error
// Handler.
//
// The passed ResourceMetrics may be reused when the call completes. If an
// exporter needs to hold this data after it returns, it needs to make a
// copy.
Export(context.Context, *metricdata.ResourceMetrics) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// ForceFlush flushes any metric data held by an exporter.
//
// The deadline or cancellation of the passed context must be honored. An
// appropriate error should be returned in these situations.
//
// This method needs to be concurrent safe.
ForceFlush(context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Shutdown flushes all metric data held by an exporter and releases any
// held computational resources.
//
// The deadline or cancellation of the passed context must be honored. An
// appropriate error should be returned in these situations.
//
// After Shutdown is called, calls to Export will perform no operation and
// instead will return an error indicating the shutdown state.
//
// This method needs to be concurrent safe.
Shutdown(context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
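A minimal sketch of a type satisfying this interface: a discarding exporter that reports cumulative temporality and defers to the default aggregation selection. It is an illustration only, not the SDK's own test exporter:

```go
package example

import (
	"context"

	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

// discardExporter drops all data; every method is trivially concurrent safe.
type discardExporter struct{}

func (discardExporter) Temporality(sdkmetric.InstrumentKind) metricdata.Temporality {
	return metricdata.CumulativeTemporality
}

func (discardExporter) Aggregation(k sdkmetric.InstrumentKind) sdkmetric.Aggregation {
	return sdkmetric.DefaultAggregationSelector(k)
}

func (discardExporter) Export(context.Context, *metricdata.ResourceMetrics) error { return nil }
func (discardExporter) ForceFlush(context.Context) error                          { return nil }
func (discardExporter) Shutdown(context.Context) error                            { return nil }

var _ sdkmetric.Exporter = discardExporter{}
```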


@@ -0,0 +1,347 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:generate stringer -type=InstrumentKind -trimprefix=InstrumentKind
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"context"
"errors"
"fmt"
"strings"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/metric/embedded"
"go.opentelemetry.io/otel/sdk/instrumentation"
"go.opentelemetry.io/otel/sdk/metric/internal/aggregate"
)
var zeroScope instrumentation.Scope
// InstrumentKind is the identifier of a group of instruments that all
// perform the same function.
type InstrumentKind uint8
const (
// instrumentKindUndefined is an undefined instrument kind, it should not
// be used by any initialized type.
instrumentKindUndefined InstrumentKind = 0 // nolint:deadcode,varcheck,unused
// InstrumentKindCounter identifies a group of instruments that record
// increasing values synchronously with the code path they are measuring.
InstrumentKindCounter InstrumentKind = 1
// InstrumentKindUpDownCounter identifies a group of instruments that
// record increasing and decreasing values synchronously with the code path
// they are measuring.
InstrumentKindUpDownCounter InstrumentKind = 2
// InstrumentKindHistogram identifies a group of instruments that record a
// distribution of values synchronously with the code path they are
// measuring.
InstrumentKindHistogram InstrumentKind = 3
// InstrumentKindObservableCounter identifies a group of instruments that
// record increasing values in an asynchronous callback.
InstrumentKindObservableCounter InstrumentKind = 4
// InstrumentKindObservableUpDownCounter identifies a group of instruments
// that record increasing and decreasing values in an asynchronous
// callback.
InstrumentKindObservableUpDownCounter InstrumentKind = 5
// InstrumentKindObservableGauge identifies a group of instruments that
// record current values in an asynchronous callback.
InstrumentKindObservableGauge InstrumentKind = 6
// InstrumentKindGauge identifies a group of instruments that record
// instantaneous values synchronously with the code path they are
// measuring.
InstrumentKindGauge InstrumentKind = 7
)
type nonComparable [0]func() // nolint: unused // This is indeed used.
// Instrument describes properties an instrument is created with.
type Instrument struct {
// Name is the human-readable identifier of the instrument.
Name string
// Description describes the purpose of the instrument.
Description string
// Kind defines the functional group of the instrument.
Kind InstrumentKind
// Unit is the unit of measurement recorded by the instrument.
Unit string
// Scope identifies the instrumentation that created the instrument.
Scope instrumentation.Scope
// Ensure forward compatibility if non-comparable fields need to be added.
nonComparable // nolint: unused
}
// IsEmpty returns whether all Instrument fields are their zero-value.
func (i Instrument) IsEmpty() bool {
return i.Name == "" &&
i.Description == "" &&
i.Kind == instrumentKindUndefined &&
i.Unit == "" &&
i.Scope == zeroScope
}
// matches returns whether all the non-zero-value fields of i match the
// corresponding fields of other. If i is empty it will match any other, and
// true will always be returned.
func (i Instrument) matches(other Instrument) bool {
return i.matchesName(other) &&
i.matchesDescription(other) &&
i.matchesKind(other) &&
i.matchesUnit(other) &&
i.matchesScope(other)
}
// matchesName returns true if the Name of i is "" or it equals the Name of
// other, otherwise false.
func (i Instrument) matchesName(other Instrument) bool {
return i.Name == "" || i.Name == other.Name
}
// matchesDescription returns true if the Description of i is "" or it equals
// the Description of other, otherwise false.
func (i Instrument) matchesDescription(other Instrument) bool {
return i.Description == "" || i.Description == other.Description
}
// matchesKind returns true if the Kind of i is its zero-value or it equals the
// Kind of other, otherwise false.
func (i Instrument) matchesKind(other Instrument) bool {
return i.Kind == instrumentKindUndefined || i.Kind == other.Kind
}
// matchesUnit returns true if the Unit of i is its zero-value or it equals the
// Unit of other, otherwise false.
func (i Instrument) matchesUnit(other Instrument) bool {
return i.Unit == "" || i.Unit == other.Unit
}
// matchesScope returns true if the Scope of i is its zero-value or it equals
// the Scope of other, otherwise false.
func (i Instrument) matchesScope(other Instrument) bool {
return (i.Scope.Name == "" || i.Scope.Name == other.Scope.Name) &&
(i.Scope.Version == "" || i.Scope.Version == other.Scope.Version) &&
(i.Scope.SchemaURL == "" || i.Scope.SchemaURL == other.Scope.SchemaURL)
}
// Stream describes the stream of data an instrument produces.
type Stream struct {
// Name is the human-readable identifier of the stream.
Name string
// Description describes the purpose of the data.
Description string
// Unit is the unit of measurement recorded.
Unit string
// Aggregation is the aggregation the stream uses for an instrument.
Aggregation Aggregation
// AttributeFilter is an attribute Filter applied to the attributes
// recorded for an instrument's measurement. If the filter returns false
// for an attribute, that attribute will not be recorded; if it returns
// true, it will be.
//
// Use NewAllowKeysFilter from "go.opentelemetry.io/otel/attribute" to
// provide an allow-list of attribute keys here.
AttributeFilter attribute.Filter
}
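The zero-value fields of Instrument act as wildcards when it is used as view selection criteria, which is what the matches helpers above implement. A hedged sketch of that usage (the function name and boundaries are arbitrary):

// exampleHistogramView matches every histogram, regardless of name, unit,
// or scope, and replaces its aggregation with explicit buckets. It would be
// registered with NewMeterProvider(WithView(exampleHistogramView())).
func exampleHistogramView() View {
	return NewView(
		Instrument{Kind: InstrumentKindHistogram},
		Stream{Aggregation: AggregationExplicitBucketHistogram{
			Boundaries: []float64{0, 5, 10},
		}},
	)
}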
// instID are the identifying properties of an instrument.
type instID struct {
// Name is the name of the stream.
Name string
// Description is the description of the stream.
Description string
// Kind defines the functional group of the instrument.
Kind InstrumentKind
// Unit is the unit of the stream.
Unit string
// Number is the number type of the stream.
Number string
}
// normalize returns a normalized copy of the instID i.
//
// Instrument names are considered case-insensitive. Standardize the instrument
// name to always be lowercase for the returned instID so it can be compared
// without the name casing affecting the comparison.
func (i instID) normalize() instID {
i.Name = strings.ToLower(i.Name)
return i
}
type int64Inst struct {
measures []aggregate.Measure[int64]
embedded.Int64Counter
embedded.Int64UpDownCounter
embedded.Int64Histogram
embedded.Int64Gauge
}
var (
_ metric.Int64Counter = (*int64Inst)(nil)
_ metric.Int64UpDownCounter = (*int64Inst)(nil)
_ metric.Int64Histogram = (*int64Inst)(nil)
_ metric.Int64Gauge = (*int64Inst)(nil)
)
func (i *int64Inst) Add(ctx context.Context, val int64, opts ...metric.AddOption) {
c := metric.NewAddConfig(opts)
i.aggregate(ctx, val, c.Attributes())
}
func (i *int64Inst) Record(ctx context.Context, val int64, opts ...metric.RecordOption) {
c := metric.NewRecordConfig(opts)
i.aggregate(ctx, val, c.Attributes())
}
func (i *int64Inst) aggregate(ctx context.Context, val int64, s attribute.Set) { // nolint:revive // okay to shadow pkg with method.
for _, in := range i.measures {
in(ctx, val, s)
}
}
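For orientation, a hedged sketch of the measurement path from the API side; the instrument name, unit, and attribute are arbitrary. Each Add below fans out to every configured measure via (*int64Inst).aggregate above:

func exampleCounter(ctx context.Context) error {
	provider := NewMeterProvider()
	defer func() { _ = provider.Shutdown(ctx) }()

	counter, err := provider.Meter("example").Int64Counter(
		"requests", metric.WithUnit("{request}"))
	if err != nil {
		return err
	}
	counter.Add(ctx, 1, metric.WithAttributes(attribute.String("route", "/health")))
	return nil
}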
type float64Inst struct {
measures []aggregate.Measure[float64]
embedded.Float64Counter
embedded.Float64UpDownCounter
embedded.Float64Histogram
embedded.Float64Gauge
}
var (
_ metric.Float64Counter = (*float64Inst)(nil)
_ metric.Float64UpDownCounter = (*float64Inst)(nil)
_ metric.Float64Histogram = (*float64Inst)(nil)
_ metric.Float64Gauge = (*float64Inst)(nil)
)
func (i *float64Inst) Add(ctx context.Context, val float64, opts ...metric.AddOption) {
c := metric.NewAddConfig(opts)
i.aggregate(ctx, val, c.Attributes())
}
func (i *float64Inst) Record(ctx context.Context, val float64, opts ...metric.RecordOption) {
c := metric.NewRecordConfig(opts)
i.aggregate(ctx, val, c.Attributes())
}
func (i *float64Inst) aggregate(ctx context.Context, val float64, s attribute.Set) {
for _, in := range i.measures {
in(ctx, val, s)
}
}
// observablID is a comparable unique identifier of an observable.
type observablID[N int64 | float64] struct {
name string
description string
kind InstrumentKind
unit string
scope instrumentation.Scope
}
type float64Observable struct {
metric.Float64Observable
*observable[float64]
embedded.Float64ObservableCounter
embedded.Float64ObservableUpDownCounter
embedded.Float64ObservableGauge
}
var (
_ metric.Float64ObservableCounter = float64Observable{}
_ metric.Float64ObservableUpDownCounter = float64Observable{}
_ metric.Float64ObservableGauge = float64Observable{}
)
func newFloat64Observable(m *meter, kind InstrumentKind, name, desc, u string) float64Observable {
return float64Observable{
observable: newObservable[float64](m, kind, name, desc, u),
}
}
type int64Observable struct {
metric.Int64Observable
*observable[int64]
embedded.Int64ObservableCounter
embedded.Int64ObservableUpDownCounter
embedded.Int64ObservableGauge
}
var (
_ metric.Int64ObservableCounter = int64Observable{}
_ metric.Int64ObservableUpDownCounter = int64Observable{}
_ metric.Int64ObservableGauge = int64Observable{}
)
func newInt64Observable(m *meter, kind InstrumentKind, name, desc, u string) int64Observable {
return int64Observable{
observable: newObservable[int64](m, kind, name, desc, u),
}
}
type observable[N int64 | float64] struct {
metric.Observable
observablID[N]
meter *meter
measures measures[N]
dropAggregation bool
}
func newObservable[N int64 | float64](m *meter, kind InstrumentKind, name, desc, u string) *observable[N] {
return &observable[N]{
observablID: observablID[N]{
name: name,
description: desc,
kind: kind,
unit: u,
scope: m.scope,
},
meter: m,
}
}
// observe records the val for the set of attrs.
func (o *observable[N]) observe(val N, s attribute.Set) {
o.measures.observe(val, s)
}
func (o *observable[N]) appendMeasures(meas []aggregate.Measure[N]) {
o.measures = append(o.measures, meas...)
}
type measures[N int64 | float64] []aggregate.Measure[N]
// observe records the val for the set of attrs.
func (m measures[N]) observe(val N, s attribute.Set) {
for _, in := range m {
in(context.Background(), val, s)
}
}
var errEmptyAgg = errors.New("no aggregators for observable instrument")
// registerable returns an error if the observable o should not be registered,
// and nil if it should. An errEmptyAgg error is returned if o is effectively a
// no-op because it does not have any aggregators. Also, an error is returned
// if m is not the Meter that created o.
func (o *observable[N]) registerable(m *meter) error {
if len(o.measures) == 0 {
return errEmptyAgg
}
if m != o.meter {
return fmt.Errorf(
"invalid registration: observable %q from Meter %q, registered with Meter %q",
o.name,
o.scope.Name,
m.scope.Name,
)
}
return nil
}
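A hedged sketch of how registration reaches registerable (the names and the observed value are arbitrary); the callback must be registered with the same Meter that created the observable, or the error above is returned:

func exampleObservable() error {
	meter := NewMeterProvider().Meter("example")
	gauge, err := meter.Int64ObservableGauge("queue.length")
	if err != nil {
		return err
	}
	reg, err := meter.RegisterCallback(func(_ context.Context, o metric.Observer) error {
		o.ObserveInt64(gauge, 42) // hypothetical queue length
		return nil
	}, gauge)
	if err != nil {
		return err // e.g. the observable was created by a different Meter
	}
	return reg.Unregister()
}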

View File

@ -0,0 +1,30 @@
// Code generated by "stringer -type=InstrumentKind -trimprefix=InstrumentKind"; DO NOT EDIT.
package metric
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[instrumentKindUndefined-0]
_ = x[InstrumentKindCounter-1]
_ = x[InstrumentKindUpDownCounter-2]
_ = x[InstrumentKindHistogram-3]
_ = x[InstrumentKindObservableCounter-4]
_ = x[InstrumentKindObservableUpDownCounter-5]
_ = x[InstrumentKindObservableGauge-6]
_ = x[InstrumentKindGauge-7]
}
const _InstrumentKind_name = "instrumentKindUndefinedCounterUpDownCounterHistogramObservableCounterObservableUpDownCounterObservableGaugeGauge"
var _InstrumentKind_index = [...]uint8{0, 23, 30, 43, 52, 69, 92, 107, 112}
func (i InstrumentKind) String() string {
if i >= InstrumentKind(len(_InstrumentKind_index)-1) {
return "InstrumentKind(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _InstrumentKind_name[_InstrumentKind_index[i]:_InstrumentKind_index[i+1]]
}

View File

@ -0,0 +1,154 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package aggregate // import "go.opentelemetry.io/otel/sdk/metric/internal/aggregate"
import (
"context"
"time"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
)
// now is used to return the current local time while allowing tests to
// override the default time.Now function.
var now = time.Now
// Measure receives measurements to be aggregated.
type Measure[N int64 | float64] func(context.Context, N, attribute.Set)
// ComputeAggregation stores the aggregate of measurements into dest and
// returns the number of aggregate data-points output.
type ComputeAggregation func(dest *metricdata.Aggregation) int
// Builder builds an aggregate function.
type Builder[N int64 | float64] struct {
// Temporality is the temporality used for the returned aggregate function.
//
// If this is not provided a default of cumulative will be used (except for
// the last-value aggregate function where delta is the only appropriate
// temporality).
Temporality metricdata.Temporality
// Filter is the attribute filter the aggregate function will use on the
// input of measurements.
Filter attribute.Filter
// ReservoirFunc is the factory function used by aggregate functions to
// create new exemplar reservoirs for each newly seen attribute set.
//
// If this is not provided a default factory function that returns an
// exemplar.Drop reservoir will be used.
ReservoirFunc func() exemplar.FilteredReservoir[N]
// AggregationLimit is the cardinality limit of measurement attributes. Once
// the limit has been reached, any measurement for new attributes will be
// aggregated into a single aggregate for the "otel.metric.overflow"
// attribute.
//
// If AggregationLimit is less than or equal to zero there will not be an
// aggregation limit imposed (i.e. unlimited attribute sets).
AggregationLimit int
}
func (b Builder[N]) resFunc() func() exemplar.FilteredReservoir[N] {
if b.ReservoirFunc != nil {
return b.ReservoirFunc
}
return exemplar.Drop
}
type fltrMeasure[N int64 | float64] func(ctx context.Context, value N, fltrAttr attribute.Set, droppedAttr []attribute.KeyValue)
func (b Builder[N]) filter(f fltrMeasure[N]) Measure[N] {
if b.Filter != nil {
fltr := b.Filter // Copy to make it immutable after assignment.
return func(ctx context.Context, n N, a attribute.Set) {
fAttr, dropped := a.Filter(fltr)
f(ctx, n, fAttr, dropped)
}
}
return func(ctx context.Context, n N, a attribute.Set) {
f(ctx, n, a, nil)
}
}
// LastValue returns a last-value aggregate function input and output.
func (b Builder[N]) LastValue() (Measure[N], ComputeAggregation) {
lv := newLastValue[N](b.AggregationLimit, b.resFunc())
switch b.Temporality {
case metricdata.DeltaTemporality:
return b.filter(lv.measure), lv.delta
default:
return b.filter(lv.measure), lv.cumulative
}
}
// PrecomputedLastValue returns a last-value aggregate function input and
// output. The aggregation returned from the returned ComputeAggregation
// function will always only return values from the previous collection cycle.
func (b Builder[N]) PrecomputedLastValue() (Measure[N], ComputeAggregation) {
lv := newPrecomputedLastValue[N](b.AggregationLimit, b.resFunc())
switch b.Temporality {
case metricdata.DeltaTemporality:
return b.filter(lv.measure), lv.delta
default:
return b.filter(lv.measure), lv.cumulative
}
}
// PrecomputedSum returns a sum aggregate function input and output. The
// arguments passed to the input are expected to be the precomputed sum values.
func (b Builder[N]) PrecomputedSum(monotonic bool) (Measure[N], ComputeAggregation) {
s := newPrecomputedSum[N](monotonic, b.AggregationLimit, b.resFunc())
switch b.Temporality {
case metricdata.DeltaTemporality:
return b.filter(s.measure), s.delta
default:
return b.filter(s.measure), s.cumulative
}
}
// Sum returns a sum aggregate function input and output.
func (b Builder[N]) Sum(monotonic bool) (Measure[N], ComputeAggregation) {
s := newSum[N](monotonic, b.AggregationLimit, b.resFunc())
switch b.Temporality {
case metricdata.DeltaTemporality:
return b.filter(s.measure), s.delta
default:
return b.filter(s.measure), s.cumulative
}
}
// ExplicitBucketHistogram returns a histogram aggregate function input and
// output.
func (b Builder[N]) ExplicitBucketHistogram(boundaries []float64, noMinMax, noSum bool) (Measure[N], ComputeAggregation) {
h := newHistogram[N](boundaries, noMinMax, noSum, b.AggregationLimit, b.resFunc())
switch b.Temporality {
case metricdata.DeltaTemporality:
return b.filter(h.measure), h.delta
default:
return b.filter(h.measure), h.cumulative
}
}
// ExponentialBucketHistogram returns a histogram aggregate function input and
// output.
func (b Builder[N]) ExponentialBucketHistogram(maxSize, maxScale int32, noMinMax, noSum bool) (Measure[N], ComputeAggregation) {
h := newExponentialHistogram[N](maxSize, maxScale, noMinMax, noSum, b.AggregationLimit, b.resFunc())
switch b.Temporality {
case metricdata.DeltaTemporality:
return b.filter(h.measure), h.delta
default:
return b.filter(h.measure), h.cumulative
}
}
// reset ensures s has capacity and sets its length. If the capacity of s is
// too small, a new slice is returned with the specified capacity and length.
func reset[T any](s []T, length, capacity int) []T {
if cap(s) < capacity {
return make([]T, length, capacity)
}
return s[:length]
}
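An in-package sketch of wiring a Builder by hand; the SDK pipeline normally does this, so the temporality, limit, and values here are purely illustrative:

func exampleBuilder() int {
	b := Builder[int64]{
		Temporality:      metricdata.DeltaTemporality,
		AggregationLimit: 1000,
	}
	measure, collect := b.Sum(true) // monotonic sum

	attrs := attribute.NewSet(attribute.String("k", "v"))
	measure(context.Background(), 5, attrs)
	measure(context.Background(), 3, attrs)

	var agg metricdata.Aggregation
	// Returns 1 data-point; agg now holds a metricdata.Sum[int64] of 8.
	return collect(&agg)
}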

View File

@ -0,0 +1,7 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package aggregate provides aggregate types used to compute aggregations and
// cycle the state of metric measurements made by the SDK. These types and
// functionality are meant only for internal SDK use.
package aggregate // import "go.opentelemetry.io/otel/sdk/metric/internal/aggregate"

View File

@ -0,0 +1,42 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package aggregate // import "go.opentelemetry.io/otel/sdk/metric/internal/aggregate"
import (
"sync"
"go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
)
var exemplarPool = sync.Pool{
New: func() any { return new([]exemplar.Exemplar) },
}
func collectExemplars[N int64 | float64](out *[]metricdata.Exemplar[N], f func(*[]exemplar.Exemplar)) {
dest := exemplarPool.Get().(*[]exemplar.Exemplar)
defer func() {
*dest = (*dest)[:0]
exemplarPool.Put(dest)
}()
*dest = reset(*dest, len(*out), cap(*out))
f(dest)
*out = reset(*out, len(*dest), cap(*dest))
for i, e := range *dest {
(*out)[i].FilteredAttributes = e.FilteredAttributes
(*out)[i].Time = e.Time
(*out)[i].SpanID = e.SpanID
(*out)[i].TraceID = e.TraceID
switch e.Value.Type() {
case exemplar.Int64ValueType:
(*out)[i].Value = N(e.Value.Int64())
case exemplar.Float64ValueType:
(*out)[i].Value = N(e.Value.Float64())
}
}
}

View File

@ -0,0 +1,442 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package aggregate // import "go.opentelemetry.io/otel/sdk/metric/internal/aggregate"
import (
"context"
"errors"
"math"
"sync"
"time"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
)
const (
expoMaxScale = 20
expoMinScale = -10
smallestNonZeroNormalFloat64 = 0x1p-1022
// These redefine the math constants with a type, so the compiler won't coerce
// them into an int on 32-bit platforms.
maxInt64 int64 = math.MaxInt64
minInt64 int64 = math.MinInt64
)
// expoHistogramDataPoint is a single data point in an exponential histogram.
type expoHistogramDataPoint[N int64 | float64] struct {
attrs attribute.Set
res exemplar.FilteredReservoir[N]
count uint64
min N
max N
sum N
maxSize int
noMinMax bool
noSum bool
scale int
posBuckets expoBuckets
negBuckets expoBuckets
zeroCount uint64
}
func newExpoHistogramDataPoint[N int64 | float64](attrs attribute.Set, maxSize, maxScale int, noMinMax, noSum bool) *expoHistogramDataPoint[N] {
f := math.MaxFloat64
max := N(f) // if N is int64, max will overflow to -9223372036854775808
min := N(-f)
if N(maxInt64) > N(f) {
max = N(maxInt64)
min = N(minInt64)
}
return &expoHistogramDataPoint[N]{
attrs: attrs,
min: max,
max: min,
maxSize: maxSize,
noMinMax: noMinMax,
noSum: noSum,
scale: maxScale,
}
}
// record adds a new measurement to the histogram. It will rescale the buckets if needed.
func (p *expoHistogramDataPoint[N]) record(v N) {
p.count++
if !p.noMinMax {
if v < p.min {
p.min = v
}
if v > p.max {
p.max = v
}
}
if !p.noSum {
p.sum += v
}
absV := math.Abs(float64(v))
if float64(absV) == 0.0 {
p.zeroCount++
return
}
bin := p.getBin(absV)
bucket := &p.posBuckets
if v < 0 {
bucket = &p.negBuckets
}
// If the new bin would make the counts larger than maxSize, we need to
// downscale current measurements.
if scaleDelta := p.scaleChange(bin, bucket.startBin, len(bucket.counts)); scaleDelta > 0 {
if p.scale-scaleDelta < expoMinScale {
// With a scale of -10 there are only two buckets for the whole range of float64 values.
// This can only happen if there is a max size of 1.
otel.Handle(errors.New("exponential histogram scale underflow"))
return
}
// Downscale
p.scale -= scaleDelta
p.posBuckets.downscale(scaleDelta)
p.negBuckets.downscale(scaleDelta)
bin = p.getBin(absV)
}
bucket.record(bin)
}
// getBin returns the bin v should be recorded into.
func (p *expoHistogramDataPoint[N]) getBin(v float64) int {
frac, exp := math.Frexp(v)
if p.scale <= 0 {
// Because of the choice of fraction, exp is always one power of two higher than we want.
correction := 1
if frac == .5 {
// If v is an exact power of two the frac will be .5 and the exp
// will be one higher than we want.
correction = 2
}
return (exp - correction) >> (-p.scale)
}
return exp<<p.scale + int(math.Log(frac)*scaleFactors[p.scale]) - 1
}
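A worked check of both branches (editor's sketch): at scale 0, v = 4 gives math.Frexp(4) = (0.5, 3); the fraction is exactly .5, so the correction is 2 and the bin is (3-2)>>0 = 1, i.e. the bucket (2, 4]. At scale 1, v = 3 gives Frexp(3) = (0.75, 2), so the bin is 2<<1 + int(math.Log(0.75)*scaleFactors[1]) - 1 = 4 + 0 - 1 = 3, i.e. the bucket (2^(3/2), 2^2] ≈ (2.83, 4].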
// scaleFactors are constants used in calculating the logarithm index. They are
// equivalent to 2^index/log(2).
var scaleFactors = [21]float64{
math.Ldexp(math.Log2E, 0),
math.Ldexp(math.Log2E, 1),
math.Ldexp(math.Log2E, 2),
math.Ldexp(math.Log2E, 3),
math.Ldexp(math.Log2E, 4),
math.Ldexp(math.Log2E, 5),
math.Ldexp(math.Log2E, 6),
math.Ldexp(math.Log2E, 7),
math.Ldexp(math.Log2E, 8),
math.Ldexp(math.Log2E, 9),
math.Ldexp(math.Log2E, 10),
math.Ldexp(math.Log2E, 11),
math.Ldexp(math.Log2E, 12),
math.Ldexp(math.Log2E, 13),
math.Ldexp(math.Log2E, 14),
math.Ldexp(math.Log2E, 15),
math.Ldexp(math.Log2E, 16),
math.Ldexp(math.Log2E, 17),
math.Ldexp(math.Log2E, 18),
math.Ldexp(math.Log2E, 19),
math.Ldexp(math.Log2E, 20),
}
// scaleChange returns the magnitude of the scale change needed to fit bin in
// the bucket. If no scale change is needed 0 is returned.
func (p *expoHistogramDataPoint[N]) scaleChange(bin, startBin, length int) int {
if length == 0 {
// No need to rescale if there are no buckets.
return 0
}
low := startBin
high := bin
if startBin >= bin {
low = bin
high = startBin + length - 1
}
count := 0
for high-low >= p.maxSize {
low = low >> 1
high = high >> 1
count++
if count > expoMaxScale-expoMinScale {
return count
}
}
return count
}
// expoBuckets is a set of buckets in an exponential histogram.
type expoBuckets struct {
startBin int
counts []uint64
}
// record increments the count for the given bin, and expands the buckets if needed.
// Size changes must be done before calling this function.
func (b *expoBuckets) record(bin int) {
if len(b.counts) == 0 {
b.counts = []uint64{1}
b.startBin = bin
return
}
endBin := b.startBin + len(b.counts) - 1
// if the new bin is inside the current range
if bin >= b.startBin && bin <= endBin {
b.counts[bin-b.startBin]++
return
}
// if the new bin is before the current start add spaces to the counts
if bin < b.startBin {
origLen := len(b.counts)
newLength := endBin - bin + 1
shift := b.startBin - bin
if newLength > cap(b.counts) {
b.counts = append(b.counts, make([]uint64, newLength-len(b.counts))...)
}
copy(b.counts[shift:origLen+shift], b.counts[:])
b.counts = b.counts[:newLength]
for i := 1; i < shift; i++ {
b.counts[i] = 0
}
b.startBin = bin
b.counts[0] = 1
return
}
// if the new bin is after the end, add spaces to the end
if bin > endBin {
if bin-b.startBin < cap(b.counts) {
b.counts = b.counts[:bin-b.startBin+1]
for i := endBin + 1 - b.startBin; i < len(b.counts); i++ {
b.counts[i] = 0
}
b.counts[bin-b.startBin] = 1
return
}
end := make([]uint64, bin-b.startBin-len(b.counts)+1)
b.counts = append(b.counts, end...)
b.counts[bin-b.startBin] = 1
}
}
// downscale shrinks the buckets by a factor of 2^delta. It sums counts into
// the correct lower-resolution bucket.
func (b *expoBuckets) downscale(delta int) {
// Example
// delta = 2
// Original offset: -6
// Counts: [ 3, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
// bins: -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4
// new bins: -2, -2, -1, -1, -1, -1, 0, 0, 0, 0, 1
// new Offset: -2
// new Counts: [4, 14, 30, 10]
if len(b.counts) <= 1 || delta < 1 {
b.startBin = b.startBin >> delta
return
}
steps := 1 << delta
offset := b.startBin % steps
offset = (offset + steps) % steps // to make offset positive
for i := 1; i < len(b.counts); i++ {
idx := i + offset
if idx%steps == 0 {
b.counts[idx/steps] = b.counts[i]
continue
}
b.counts[idx/steps] += b.counts[i]
}
lastIdx := (len(b.counts) - 1 + offset) / steps
b.counts = b.counts[:lastIdx+1]
b.startBin = b.startBin >> delta
}
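The worked example in the comment above can be checked directly; a minimal in-package sketch whose literals mirror the comment:

func exampleDownscale() {
	b := expoBuckets{
		startBin: -6,
		counts:   []uint64{3, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
	}
	b.downscale(2)
	// Now b.startBin == -2 and b.counts == []uint64{4, 14, 30, 10},
	// matching the example above.
}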
// newExponentialHistogram returns an Aggregator that summarizes a set of
// measurements as an exponential histogram. Each histogram is scoped by attributes
// and the aggregation cycle the measurements were made in.
func newExponentialHistogram[N int64 | float64](maxSize, maxScale int32, noMinMax, noSum bool, limit int, r func() exemplar.FilteredReservoir[N]) *expoHistogram[N] {
return &expoHistogram[N]{
noSum: noSum,
noMinMax: noMinMax,
maxSize: int(maxSize),
maxScale: int(maxScale),
newRes: r,
limit: newLimiter[*expoHistogramDataPoint[N]](limit),
values: make(map[attribute.Distinct]*expoHistogramDataPoint[N]),
start: now(),
}
}
// expoHistogram summarizes a set of measurements as a histogram with
// exponentially defined buckets.
type expoHistogram[N int64 | float64] struct {
noSum bool
noMinMax bool
maxSize int
maxScale int
newRes func() exemplar.FilteredReservoir[N]
limit limiter[*expoHistogramDataPoint[N]]
values map[attribute.Distinct]*expoHistogramDataPoint[N]
valuesMu sync.Mutex
start time.Time
}
func (e *expoHistogram[N]) measure(ctx context.Context, value N, fltrAttr attribute.Set, droppedAttr []attribute.KeyValue) {
// Ignore NaN and infinity.
if math.IsInf(float64(value), 0) || math.IsNaN(float64(value)) {
return
}
e.valuesMu.Lock()
defer e.valuesMu.Unlock()
attr := e.limit.Attributes(fltrAttr, e.values)
v, ok := e.values[attr.Equivalent()]
if !ok {
v = newExpoHistogramDataPoint[N](attr, e.maxSize, e.maxScale, e.noMinMax, e.noSum)
v.res = e.newRes()
e.values[attr.Equivalent()] = v
}
v.record(value)
v.res.Offer(ctx, value, droppedAttr)
}
func (e *expoHistogram[N]) delta(dest *metricdata.Aggregation) int {
t := now()
// If *dest is not a metricdata.ExponentialHistogram, memory reuse is missed.
// In that case, use the zero-value h and hope for better alignment next cycle.
h, _ := (*dest).(metricdata.ExponentialHistogram[N])
h.Temporality = metricdata.DeltaTemporality
e.valuesMu.Lock()
defer e.valuesMu.Unlock()
n := len(e.values)
hDPts := reset(h.DataPoints, n, n)
var i int
for _, val := range e.values {
hDPts[i].Attributes = val.attrs
hDPts[i].StartTime = e.start
hDPts[i].Time = t
hDPts[i].Count = val.count
hDPts[i].Scale = int32(val.scale)
hDPts[i].ZeroCount = val.zeroCount
hDPts[i].ZeroThreshold = 0.0
hDPts[i].PositiveBucket.Offset = int32(val.posBuckets.startBin)
hDPts[i].PositiveBucket.Counts = reset(hDPts[i].PositiveBucket.Counts, len(val.posBuckets.counts), len(val.posBuckets.counts))
copy(hDPts[i].PositiveBucket.Counts, val.posBuckets.counts)
hDPts[i].NegativeBucket.Offset = int32(val.negBuckets.startBin)
hDPts[i].NegativeBucket.Counts = reset(hDPts[i].NegativeBucket.Counts, len(val.negBuckets.counts), len(val.negBuckets.counts))
copy(hDPts[i].NegativeBucket.Counts, val.negBuckets.counts)
if !e.noSum {
hDPts[i].Sum = val.sum
}
if !e.noMinMax {
hDPts[i].Min = metricdata.NewExtrema(val.min)
hDPts[i].Max = metricdata.NewExtrema(val.max)
}
collectExemplars(&hDPts[i].Exemplars, val.res.Collect)
i++
}
// Unused attribute sets do not report.
clear(e.values)
e.start = t
h.DataPoints = hDPts
*dest = h
return n
}
func (e *expoHistogram[N]) cumulative(dest *metricdata.Aggregation) int {
t := now()
// If *dest is not a metricdata.ExponentialHistogram, memory reuse is missed.
// In that case, use the zero-value h and hope for better alignment next cycle.
h, _ := (*dest).(metricdata.ExponentialHistogram[N])
h.Temporality = metricdata.CumulativeTemporality
e.valuesMu.Lock()
defer e.valuesMu.Unlock()
n := len(e.values)
hDPts := reset(h.DataPoints, n, n)
var i int
for _, val := range e.values {
hDPts[i].Attributes = val.attrs
hDPts[i].StartTime = e.start
hDPts[i].Time = t
hDPts[i].Count = val.count
hDPts[i].Scale = int32(val.scale)
hDPts[i].ZeroCount = val.zeroCount
hDPts[i].ZeroThreshold = 0.0
hDPts[i].PositiveBucket.Offset = int32(val.posBuckets.startBin)
hDPts[i].PositiveBucket.Counts = reset(hDPts[i].PositiveBucket.Counts, len(val.posBuckets.counts), len(val.posBuckets.counts))
copy(hDPts[i].PositiveBucket.Counts, val.posBuckets.counts)
hDPts[i].NegativeBucket.Offset = int32(val.negBuckets.startBin)
hDPts[i].NegativeBucket.Counts = reset(hDPts[i].NegativeBucket.Counts, len(val.negBuckets.counts), len(val.negBuckets.counts))
copy(hDPts[i].NegativeBucket.Counts, val.negBuckets.counts)
if !e.noSum {
hDPts[i].Sum = val.sum
}
if !e.noMinMax {
hDPts[i].Min = metricdata.NewExtrema(val.min)
hDPts[i].Max = metricdata.NewExtrema(val.max)
}
collectExemplars(&hDPts[i].Exemplars, val.res.Collect)
i++
// TODO (#3006): This will use an unbounded amount of memory if there
// are unbounded number of attribute sets being aggregated. Attribute
// sets that become "stale" need to be forgotten so this will not
// overload the system.
}
h.DataPoints = hDPts
*dest = h
return n
}

View File

@ -0,0 +1,233 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package aggregate // import "go.opentelemetry.io/otel/sdk/metric/internal/aggregate"
import (
"context"
"slices"
"sort"
"sync"
"time"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
)
type buckets[N int64 | float64] struct {
attrs attribute.Set
res exemplar.FilteredReservoir[N]
counts []uint64
count uint64
total N
min, max N
}
// newBuckets returns buckets with n bins.
func newBuckets[N int64 | float64](attrs attribute.Set, n int) *buckets[N] {
return &buckets[N]{attrs: attrs, counts: make([]uint64, n)}
}
func (b *buckets[N]) sum(value N) { b.total += value }
func (b *buckets[N]) bin(idx int, value N) {
b.counts[idx]++
b.count++
if value < b.min {
b.min = value
} else if value > b.max {
b.max = value
}
}
// histValues summarizes a set of measurements as a histogram with
// explicitly defined buckets.
type histValues[N int64 | float64] struct {
noSum bool
bounds []float64
newRes func() exemplar.FilteredReservoir[N]
limit limiter[*buckets[N]]
values map[attribute.Distinct]*buckets[N]
valuesMu sync.Mutex
}
func newHistValues[N int64 | float64](bounds []float64, noSum bool, limit int, r func() exemplar.FilteredReservoir[N]) *histValues[N] {
// Keeping all buckets correctly associated with the passed boundaries is
// ultimately this type's responsibility. Make a copy here so we can always
// guarantee this, or, in the case of failure, have complete control over
// the fix.
b := slices.Clone(bounds)
slices.Sort(b)
return &histValues[N]{
noSum: noSum,
bounds: b,
newRes: r,
limit: newLimiter[*buckets[N]](limit),
values: make(map[attribute.Distinct]*buckets[N]),
}
}
// measure records the measurement value, scoped by attr, and aggregates it
// into a histogram.
func (s *histValues[N]) measure(ctx context.Context, value N, fltrAttr attribute.Set, droppedAttr []attribute.KeyValue) {
// This search will return an index in the range [0, len(s.bounds)], where
// it will return len(s.bounds) if value is greater than the last element
// of s.bounds. This aligns with the buckets in that the length of buckets
// is len(s.bounds)+1, with the last bucket representing:
// (s.bounds[len(s.bounds)-1], +∞).
idx := sort.SearchFloat64s(s.bounds, float64(value))
s.valuesMu.Lock()
defer s.valuesMu.Unlock()
attr := s.limit.Attributes(fltrAttr, s.values)
b, ok := s.values[attr.Equivalent()]
if !ok {
// N+1 buckets. For example:
//
// bounds = [0, 5, 10]
//
// Then,
//
// buckets = (-∞, 0], (0, 5.0], (5.0, 10.0], (10.0, +∞)
b = newBuckets[N](attr, len(s.bounds)+1)
b.res = s.newRes()
// Ensure min and max are recorded values (not zero), for new buckets.
b.min, b.max = value, value
s.values[attr.Equivalent()] = b
}
b.bin(idx, value)
if !s.noSum {
b.sum(value)
}
b.res.Offer(ctx, value, droppedAttr)
}
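A hedged sketch of the index lookup above with arbitrary bounds; note that a value equal to a boundary falls into the lower (upper-inclusive) bucket:

func exampleBucketIndex() {
	bounds := []float64{0, 5, 10} // buckets: (-inf, 0], (0, 5], (5, 10], (10, +inf)
	_ = sort.SearchFloat64s(bounds, -1) // 0: (-inf, 0]
	_ = sort.SearchFloat64s(bounds, 5)  // 1: (0, 5], boundary is upper-inclusive
	_ = sort.SearchFloat64s(bounds, 7)  // 2: (5, 10]
	_ = sort.SearchFloat64s(bounds, 11) // 3: (10, +inf)
}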
// newHistogram returns an Aggregator that summarizes a set of measurements as
// a histogram.
func newHistogram[N int64 | float64](boundaries []float64, noMinMax, noSum bool, limit int, r func() exemplar.FilteredReservoir[N]) *histogram[N] {
return &histogram[N]{
histValues: newHistValues[N](boundaries, noSum, limit, r),
noMinMax: noMinMax,
start: now(),
}
}
// histogram summarizes a set of measurements as a histogram with explicitly
// defined buckets.
type histogram[N int64 | float64] struct {
*histValues[N]
noMinMax bool
start time.Time
}
func (s *histogram[N]) delta(dest *metricdata.Aggregation) int {
t := now()
// If *dest is not a metricdata.Histogram, memory reuse is missed. In that
// case, use the zero-value h and hope for better alignment next cycle.
h, _ := (*dest).(metricdata.Histogram[N])
h.Temporality = metricdata.DeltaTemporality
s.valuesMu.Lock()
defer s.valuesMu.Unlock()
// Do not allow modification of our copy of bounds.
bounds := slices.Clone(s.bounds)
n := len(s.values)
hDPts := reset(h.DataPoints, n, n)
var i int
for _, val := range s.values {
hDPts[i].Attributes = val.attrs
hDPts[i].StartTime = s.start
hDPts[i].Time = t
hDPts[i].Count = val.count
hDPts[i].Bounds = bounds
hDPts[i].BucketCounts = val.counts
if !s.noSum {
hDPts[i].Sum = val.total
}
if !s.noMinMax {
hDPts[i].Min = metricdata.NewExtrema(val.min)
hDPts[i].Max = metricdata.NewExtrema(val.max)
}
collectExemplars(&hDPts[i].Exemplars, val.res.Collect)
i++
}
// Unused attribute sets do not report.
clear(s.values)
// The delta collection cycle resets.
s.start = t
h.DataPoints = hDPts
*dest = h
return n
}
func (s *histogram[N]) cumulative(dest *metricdata.Aggregation) int {
t := now()
// If *dest is not a metricdata.Histogram, memory reuse is missed. In that
// case, use the zero-value h and hope for better alignment next cycle.
h, _ := (*dest).(metricdata.Histogram[N])
h.Temporality = metricdata.CumulativeTemporality
s.valuesMu.Lock()
defer s.valuesMu.Unlock()
// Do not allow modification of our copy of bounds.
bounds := slices.Clone(s.bounds)
n := len(s.values)
hDPts := reset(h.DataPoints, n, n)
var i int
for _, val := range s.values {
hDPts[i].Attributes = val.attrs
hDPts[i].StartTime = s.start
hDPts[i].Time = t
hDPts[i].Count = val.count
hDPts[i].Bounds = bounds
// The HistogramDataPoint field values returned need to be copies of
// the buckets value as we will keep updating them.
//
// TODO (#3047): Making copies for bounds and counts incurs a large
// memory allocation footprint. Alternatives should be explored.
hDPts[i].BucketCounts = slices.Clone(val.counts)
if !s.noSum {
hDPts[i].Sum = val.total
}
if !s.noMinMax {
hDPts[i].Min = metricdata.NewExtrema(val.min)
hDPts[i].Max = metricdata.NewExtrema(val.max)
}
collectExemplars(&hDPts[i].Exemplars, val.res.Collect)
i++
// TODO (#3006): This will use an unbounded amount of memory if there
// are unbounded number of attribute sets being aggregated. Attribute
// sets that become "stale" need to be forgotten so this will not
// overload the system.
}
h.DataPoints = hDPts
*dest = h
return n
}

View File

@ -0,0 +1,162 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package aggregate // import "go.opentelemetry.io/otel/sdk/metric/internal/aggregate"
import (
"context"
"sync"
"time"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
)
// datapoint is timestamped measurement data.
type datapoint[N int64 | float64] struct {
attrs attribute.Set
value N
res exemplar.FilteredReservoir[N]
}
func newLastValue[N int64 | float64](limit int, r func() exemplar.FilteredReservoir[N]) *lastValue[N] {
return &lastValue[N]{
newRes: r,
limit: newLimiter[datapoint[N]](limit),
values: make(map[attribute.Distinct]datapoint[N]),
start: now(),
}
}
// lastValue summarizes a set of measurements as the last one made.
type lastValue[N int64 | float64] struct {
sync.Mutex
newRes func() exemplar.FilteredReservoir[N]
limit limiter[datapoint[N]]
values map[attribute.Distinct]datapoint[N]
start time.Time
}
func (s *lastValue[N]) measure(ctx context.Context, value N, fltrAttr attribute.Set, droppedAttr []attribute.KeyValue) {
s.Lock()
defer s.Unlock()
attr := s.limit.Attributes(fltrAttr, s.values)
d, ok := s.values[attr.Equivalent()]
if !ok {
d.res = s.newRes()
}
d.attrs = attr
d.value = value
d.res.Offer(ctx, value, droppedAttr)
s.values[attr.Equivalent()] = d
}
func (s *lastValue[N]) delta(dest *metricdata.Aggregation) int {
t := now()
// Ignore if dest is not a metricdata.Gauge. The chance for memory reuse of
// the DataPoints is missed (better luck next time).
gData, _ := (*dest).(metricdata.Gauge[N])
s.Lock()
defer s.Unlock()
n := s.copyDpts(&gData.DataPoints, t)
// Do not report stale values.
clear(s.values)
// Update start time for delta temporality.
s.start = t
*dest = gData
return n
}
func (s *lastValue[N]) cumulative(dest *metricdata.Aggregation) int {
t := now()
// Ignore if dest is not a metricdata.Gauge. The chance for memory reuse of
// the DataPoints is missed (better luck next time).
gData, _ := (*dest).(metricdata.Gauge[N])
s.Lock()
defer s.Unlock()
n := s.copyDpts(&gData.DataPoints, t)
// TODO (#3006): This will use an unbounded amount of memory if there
// are unbounded number of attribute sets being aggregated. Attribute
// sets that become "stale" need to be forgotten so this will not
// overload the system.
*dest = gData
return n
}
// copyDpts copies the datapoints held by s into dest. The number of datapoints
// copied is returned.
func (s *lastValue[N]) copyDpts(dest *[]metricdata.DataPoint[N], t time.Time) int {
n := len(s.values)
*dest = reset(*dest, n, n)
var i int
for _, v := range s.values {
(*dest)[i].Attributes = v.attrs
(*dest)[i].StartTime = s.start
(*dest)[i].Time = t
(*dest)[i].Value = v.value
collectExemplars(&(*dest)[i].Exemplars, v.res.Collect)
i++
}
return n
}
// newPrecomputedLastValue returns an aggregator that summarizes a set of
// observations as the last one made.
func newPrecomputedLastValue[N int64 | float64](limit int, r func() exemplar.FilteredReservoir[N]) *precomputedLastValue[N] {
return &precomputedLastValue[N]{lastValue: newLastValue[N](limit, r)}
}
// precomputedLastValue summarizes a set of observations as the last one made.
type precomputedLastValue[N int64 | float64] struct {
*lastValue[N]
}
func (s *precomputedLastValue[N]) delta(dest *metricdata.Aggregation) int {
t := now()
// Ignore if dest is not a metricdata.Gauge. The chance for memory reuse of
// the DataPoints is missed (better luck next time).
gData, _ := (*dest).(metricdata.Gauge[N])
s.Lock()
defer s.Unlock()
n := s.copyDpts(&gData.DataPoints, t)
// Do not report stale values.
clear(s.values)
// Update start time for delta temporality.
s.start = t
*dest = gData
return n
}
func (s *precomputedLastValue[N]) cumulative(dest *metricdata.Aggregation) int {
t := now()
// Ignore if dest is not a metricdata.Gauge. The chance for memory reuse of
// the DataPoints is missed (better luck next time).
gData, _ := (*dest).(metricdata.Gauge[N])
s.Lock()
defer s.Unlock()
n := s.copyDpts(&gData.DataPoints, t)
// Do not report stale values.
clear(s.values)
*dest = gData
return n
}

View File

@ -0,0 +1,42 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package aggregate // import "go.opentelemetry.io/otel/sdk/metric/internal/aggregate"
import "go.opentelemetry.io/otel/attribute"
// overflowSet is the attribute set used to record a measurement when adding
// another distinct attribute set to the aggregate would exceed the aggregate
// limit.
var overflowSet = attribute.NewSet(attribute.Bool("otel.metric.overflow", true))
// limiter limits aggregate values.
type limiter[V any] struct {
// aggLimit is the maximum number of metric streams that can be aggregated.
//
// Once the aggLimit has been met, any metric stream with attributes
// distinct from every set already aggregated will instead be aggregated
// into an "overflow" metric stream. That stream will only contain the
// "otel.metric.overflow"=true attribute.
aggLimit int
}
// newLimiter returns a new limiter with the provided aggregation limit.
func newLimiter[V any](aggregation int) limiter[V] {
return limiter[V]{aggLimit: aggregation}
}
// Attributes checks if adding a measurement for attrs will exceed the
// aggregation cardinality limit for the existing measurements. If it will,
// overflowSet is returned. Otherwise, if it will not exceed the limit, or the
// limit is not set (limit <= 0), attrs is returned.
func (l limiter[V]) Attributes(attrs attribute.Set, measurements map[attribute.Distinct]V) attribute.Set {
if l.aggLimit > 0 {
_, exists := measurements[attrs.Equivalent()]
if !exists && len(measurements) >= l.aggLimit-1 {
return overflowSet
}
}
return attrs
}
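An in-package sketch of the overflow behavior, assuming a limit of 3 (two regular attribute sets plus the overflow set):

func exampleLimiter() {
	lim := newLimiter[int](3)
	m := map[attribute.Distinct]int{}

	a := attribute.NewSet(attribute.String("user", "a"))
	m[lim.Attributes(a, m).Equivalent()] = 1 // stored as-is

	b := attribute.NewSet(attribute.String("user", "b"))
	m[lim.Attributes(b, m).Equivalent()] = 1 // stored as-is

	// Two of the three slots are now taken, so a new distinct set
	// overflows: this returns overflowSet, not c.
	c := attribute.NewSet(attribute.String("user", "c"))
	_ = lim.Attributes(c, m)
}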

View File

@ -0,0 +1,238 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package aggregate // import "go.opentelemetry.io/otel/sdk/metric/internal/aggregate"
import (
"context"
"sync"
"time"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
)
type sumValue[N int64 | float64] struct {
n N
res exemplar.FilteredReservoir[N]
attrs attribute.Set
}
// valueMap is the storage for sums.
type valueMap[N int64 | float64] struct {
sync.Mutex
newRes func() exemplar.FilteredReservoir[N]
limit limiter[sumValue[N]]
values map[attribute.Distinct]sumValue[N]
}
func newValueMap[N int64 | float64](limit int, r func() exemplar.FilteredReservoir[N]) *valueMap[N] {
return &valueMap[N]{
newRes: r,
limit: newLimiter[sumValue[N]](limit),
values: make(map[attribute.Distinct]sumValue[N]),
}
}
func (s *valueMap[N]) measure(ctx context.Context, value N, fltrAttr attribute.Set, droppedAttr []attribute.KeyValue) {
s.Lock()
defer s.Unlock()
attr := s.limit.Attributes(fltrAttr, s.values)
v, ok := s.values[attr.Equivalent()]
if !ok {
v.res = s.newRes()
}
v.attrs = attr
v.n += value
v.res.Offer(ctx, value, droppedAttr)
s.values[attr.Equivalent()] = v
}
// newSum returns an aggregator that summarizes a set of measurements as their
// arithmetic sum. Each sum is scoped by attributes and the aggregation cycle
// the measurements were made in.
func newSum[N int64 | float64](monotonic bool, limit int, r func() exemplar.FilteredReservoir[N]) *sum[N] {
return &sum[N]{
valueMap: newValueMap[N](limit, r),
monotonic: monotonic,
start: now(),
}
}
// sum summarizes a set of measurements made as their arithmetic sum.
type sum[N int64 | float64] struct {
*valueMap[N]
monotonic bool
start time.Time
}
func (s *sum[N]) delta(dest *metricdata.Aggregation) int {
t := now()
// If *dest is not a metricdata.Sum, memory reuse is missed. In that case,
// use the zero-value sData and hope for better alignment next cycle.
sData, _ := (*dest).(metricdata.Sum[N])
sData.Temporality = metricdata.DeltaTemporality
sData.IsMonotonic = s.monotonic
s.Lock()
defer s.Unlock()
n := len(s.values)
dPts := reset(sData.DataPoints, n, n)
var i int
for _, val := range s.values {
dPts[i].Attributes = val.attrs
dPts[i].StartTime = s.start
dPts[i].Time = t
dPts[i].Value = val.n
collectExemplars(&dPts[i].Exemplars, val.res.Collect)
i++
}
// Do not report stale values.
clear(s.values)
// The delta collection cycle resets.
s.start = t
sData.DataPoints = dPts
*dest = sData
return n
}
func (s *sum[N]) cumulative(dest *metricdata.Aggregation) int {
t := now()
// If *dest is not a metricdata.Sum, memory reuse is missed. In that case,
// use the zero-value sData and hope for better alignment next cycle.
sData, _ := (*dest).(metricdata.Sum[N])
sData.Temporality = metricdata.CumulativeTemporality
sData.IsMonotonic = s.monotonic
s.Lock()
defer s.Unlock()
n := len(s.values)
dPts := reset(sData.DataPoints, n, n)
var i int
for _, value := range s.values {
dPts[i].Attributes = value.attrs
dPts[i].StartTime = s.start
dPts[i].Time = t
dPts[i].Value = value.n
collectExemplars(&dPts[i].Exemplars, value.res.Collect)
// TODO (#3006): This will use an unbounded amount of memory if there
// are unbounded number of attribute sets being aggregated. Attribute
// sets that become "stale" need to be forgotten so this will not
// overload the system.
i++
}
sData.DataPoints = dPts
*dest = sData
return n
}
// newPrecomputedSum returns an aggregator that summarizes a set of
// observations as their arithmetic sum. Each sum is scoped by attributes and
// the aggregation cycle the measurements were made in.
func newPrecomputedSum[N int64 | float64](monotonic bool, limit int, r func() exemplar.FilteredReservoir[N]) *precomputedSum[N] {
return &precomputedSum[N]{
valueMap: newValueMap[N](limit, r),
monotonic: monotonic,
start: now(),
}
}
// precomputedSum summarizes a set of observations as their arithmetic sum.
type precomputedSum[N int64 | float64] struct {
*valueMap[N]
monotonic bool
start time.Time
reported map[attribute.Distinct]N
}
func (s *precomputedSum[N]) delta(dest *metricdata.Aggregation) int {
t := now()
newReported := make(map[attribute.Distinct]N)
// If *dest is not a metricdata.Sum, memory reuse is missed. In that case,
// use the zero-value sData and hope for better alignment next cycle.
sData, _ := (*dest).(metricdata.Sum[N])
sData.Temporality = metricdata.DeltaTemporality
sData.IsMonotonic = s.monotonic
s.Lock()
defer s.Unlock()
n := len(s.values)
dPts := reset(sData.DataPoints, n, n)
var i int
for key, value := range s.values {
delta := value.n - s.reported[key]
dPts[i].Attributes = value.attrs
dPts[i].StartTime = s.start
dPts[i].Time = t
dPts[i].Value = delta
collectExemplars(&dPts[i].Exemplars, value.res.Collect)
newReported[key] = value.n
i++
}
// Unused attribute sets do not report.
clear(s.values)
s.reported = newReported
// The delta collection cycle resets.
s.start = t
sData.DataPoints = dPts
*dest = sData
return n
}
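An in-package sketch of the reported-value bookkeeping above, with hypothetical observations; each exported delta is the change since the previous collection:

func examplePrecomputedDelta() {
	s := newPrecomputedSum[int64](true, 0, exemplar.Drop[int64])
	attrs := attribute.NewSet(attribute.String("k", "v"))
	var agg metricdata.Aggregation

	s.measure(context.Background(), 10, attrs, nil)
	s.delta(&agg) // exports 10; remembers reported[attrs] = 10

	s.measure(context.Background(), 25, attrs, nil)
	s.delta(&agg) // exports 25 - 10 = 15
}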
func (s *precomputedSum[N]) cumulative(dest *metricdata.Aggregation) int {
t := now()
// If *dest is not a metricdata.Sum, memory reuse is missed. In that case,
// use the zero-value sData and hope for better alignment next cycle.
sData, _ := (*dest).(metricdata.Sum[N])
sData.Temporality = metricdata.CumulativeTemporality
sData.IsMonotonic = s.monotonic
s.Lock()
defer s.Unlock()
n := len(s.values)
dPts := reset(sData.DataPoints, n, n)
var i int
for _, val := range s.values {
dPts[i].Attributes = val.attrs
dPts[i].StartTime = s.start
dPts[i].Time = t
dPts[i].Value = val.n
collectExemplars(&dPts[i].Exemplars, val.res.Collect)
i++
}
// Unused attribute sets do not report.
clear(s.values)
sData.DataPoints = dPts
*dest = sData
return n
}

View File

@ -0,0 +1,6 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package exemplar provides an implementation of the OpenTelemetry exemplar
// reservoir to be used in metric collection pipelines.
package exemplar // import "go.opentelemetry.io/otel/sdk/metric/internal/exemplar"

View File

@ -0,0 +1,23 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package exemplar // import "go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
import (
"context"
"go.opentelemetry.io/otel/attribute"
)
// Drop returns a [FilteredReservoir] that drops all measurements it is offered.
func Drop[N int64 | float64]() FilteredReservoir[N] { return &dropRes[N]{} }
type dropRes[N int64 | float64] struct{}
// Offer does nothing, all measurements offered will be dropped.
func (r *dropRes[N]) Offer(context.Context, N, []attribute.KeyValue) {}
// Collect resets dest. No exemplars will ever be returned.
func (r *dropRes[N]) Collect(dest *[]Exemplar) {
*dest = (*dest)[:0]
}

View File

@ -0,0 +1,29 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package exemplar // import "go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
import (
"time"
"go.opentelemetry.io/otel/attribute"
)
// Exemplar is a measurement sampled from a timeseries providing a typical
// example.
type Exemplar struct {
// FilteredAttributes are the attributes recorded with the measurement but
// filtered out of the timeseries' aggregated data.
FilteredAttributes []attribute.KeyValue
// Time is the time when the measurement was recorded.
Time time.Time
// Value is the measured value.
Value Value
// SpanID is the ID of the span that was active during the measurement. If
// no span was active or the span was not sampled this will be empty.
SpanID []byte `json:",omitempty"`
// TraceID is the ID of the trace the active span belonged to during the
// measurement. If no span was active or the span was not sampled this will
// be empty.
TraceID []byte `json:",omitempty"`
}

View File

@ -0,0 +1,29 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package exemplar // import "go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
import (
"context"
"go.opentelemetry.io/otel/trace"
)
// Filter determines if a measurement should be offered.
//
// The passed ctx needs to contain any baggage or span that were active
// when the measurement was made. This information may be used by the
// Reservoir in making a sampling decision.
type Filter func(context.Context) bool
// SampledFilter is a [Filter] that will only offer measurements
// if the passed context associated with the measurement contains a sampled
// [go.opentelemetry.io/otel/trace.SpanContext].
func SampledFilter(ctx context.Context) bool {
return trace.SpanContextFromContext(ctx).IsSampled()
}
// AlwaysOnFilter is a [Filter] that always offers measurements.
func AlwaysOnFilter(ctx context.Context) bool {
return true
}
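A hedged sketch wiring a filter to a reservoir; with a bare background context there is no sampled span, so the offer is dropped:

func exampleFilteredReservoir() {
	// Keep at most 4 exemplars, but only for sampled traces.
	res := NewFilteredReservoir[float64](SampledFilter, FixedSize(4))

	ctx := context.Background() // no sampled span: SampledFilter returns false
	res.Offer(ctx, 12.5, nil)   // dropped

	var out []Exemplar
	res.Collect(&out) // len(out) == 0
}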

View File

@ -0,0 +1,49 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package exemplar // import "go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
import (
"context"
"time"
"go.opentelemetry.io/otel/attribute"
)
// FilteredReservoir wraps a [Reservoir] with a filter.
type FilteredReservoir[N int64 | float64] interface {
// Offer accepts the parameters associated with a measurement. The
// parameters will be stored as an exemplar if the filter decides to
// sample the measurement.
//
// The passed ctx needs to contain any baggage or span that were active
// when the measurement was made. This information may be used by the
// Reservoir in making a sampling decision.
Offer(ctx context.Context, val N, attr []attribute.KeyValue)
// Collect returns all the held exemplars in the reservoir.
Collect(dest *[]Exemplar)
}
// filteredReservoir applies a filter before offering measurements to the
// wrapped reservoir.
type filteredReservoir[N int64 | float64] struct {
filter Filter
reservoir Reservoir
}
// NewFilteredReservoir creates a [FilteredReservoir] which only offers values
// that are allowed by the filter.
func NewFilteredReservoir[N int64 | float64](f Filter, r Reservoir) FilteredReservoir[N] {
return &filteredReservoir[N]{
filter: f,
reservoir: r,
}
}
func (f *filteredReservoir[N]) Offer(ctx context.Context, val N, attr []attribute.KeyValue) {
if f.filter(ctx) {
// Only record the current time if we are sampling this measurement.
f.reservoir.Offer(ctx, time.Now(), NewValue(val), attr)
}
}
func (f *filteredReservoir[N]) Collect(dest *[]Exemplar) { f.reservoir.Collect(dest) }

View File

@ -0,0 +1,46 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package exemplar // import "go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
import (
"context"
"slices"
"sort"
"time"
"go.opentelemetry.io/otel/attribute"
)
// Histogram returns a [Reservoir] that samples the last measurement that falls
// within a histogram bucket. The histogram bucket upper-boundaries are defined
// by bounds.
//
// The passed bounds will be sorted by this function.
func Histogram(bounds []float64) Reservoir {
slices.Sort(bounds)
return &histRes{
bounds: bounds,
storage: newStorage(len(bounds) + 1),
}
}
type histRes struct {
*storage
// bounds are bucket bounds in ascending order.
bounds []float64
}
func (r *histRes) Offer(ctx context.Context, t time.Time, v Value, a []attribute.KeyValue) {
var x float64
switch v.Type() {
case Int64ValueType:
x = float64(v.Int64())
case Float64ValueType:
x = v.Float64()
default:
panic("unknown value type")
}
r.store[sort.SearchFloat64s(r.bounds, x)] = newMeasurement(ctx, t, v, a)
}

View File

@ -0,0 +1,191 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package exemplar // import "go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
import (
"context"
"math"
"math/rand"
"sync"
"time"
"go.opentelemetry.io/otel/attribute"
)
var (
// rng is used to make sampling decisions.
//
// Do not use crypto/rand. There is no reason for the decrease in performance
// given this is not a security sensitive decision.
rng = rand.New(rand.NewSource(time.Now().UnixNano()))
// Ensure concurrent safe access to rng and its underlying source.
rngMu sync.Mutex
)
// random returns, as a float64, a uniform pseudo-random number in the open
// interval (0.0,1.0).
func random() float64 {
// TODO: This does not return a uniform number. rng.Float64 returns a
// uniformly random int in [0,2^53) that is divided by 2^53. Meaning it
// returns multiples of 2^-53, and not all floating point numbers between 0
// and 1 (i.e. for values less than 2^-4 the 4 last bits of the significand
// are always going to be 0).
//
// An alternative algorithm should be considered that will actually return
// a uniform number in the interval (0,1). For example, since the default
// rand source provides a uniform distribution for Int63, this can be
// converted following the prototypical code of Mersenne Twister 64 (Takuji
// Nishimura and Makoto Matsumoto:
// http://www.math.sci.hiroshima-u.ac.jp/m-mat/MT/VERSIONS/C-LANG/mt19937-64.c)
//
// (float64(rng.Int63()>>11) + 0.5) * (1.0 / 4503599627370496.0)
//
// There are likely many other methods to explore here as well.
rngMu.Lock()
defer rngMu.Unlock()
f := rng.Float64()
for f == 0 {
f = rng.Float64()
}
return f
}
// FixedSize returns a [Reservoir] that samples at most k exemplars. If there
// are k or fewer measurements made, the Reservoir will sample each one. If
// there are more than k, the Reservoir will then randomly sample each
// additional measurement with a decreasing probability.
func FixedSize(k int) Reservoir {
r := &randRes{storage: newStorage(k)}
r.reset()
return r
}
type randRes struct {
*storage
// count is the number of measurements seen.
count int64
// next is the next count that will store a measurement at a random index
// once the reservoir has been filled.
next int64
// w is the largest random number in a distribution that is used to compute
// the next value of next.
w float64
}
func (r *randRes) Offer(ctx context.Context, t time.Time, n Value, a []attribute.KeyValue) {
// The following algorithm is "Algorithm L" from Li, Kim-Hung (4 December
// 1994). "Reservoir-Sampling Algorithms of Time Complexity
// O(n(1+log(N/n)))". ACM Transactions on Mathematical Software. 20 (4):
// 481–493 (https://dl.acm.org/doi/10.1145/198429.198435).
//
// A high-level overview of "Algorithm L":
// 0) Pre-calculate the random count greater than the storage size at
// which an exemplar will be replaced.
// 1) Accept all measurements offered until the configured storage size is
// reached.
// 2) Loop:
// a) When the pre-calculated count is reached, replace a random
// existing exemplar with the offered measurement.
// b) Calculate the next random count greater than the existing one
// which will replace another exemplar.
//
// The way a "replacement" count is computed is by looking at `n` number of
// independent random numbers each corresponding to an offered measurement.
// Of these numbers, the smallest `k` (the same size as the storage
// capacity) are kept as a subset. The maximum value in this
// subset, called `w` is used to weight another random number generation
// for the next count that will be considered.
//
// By weighting the next count computation like described, it is able to
// perform a uniformly-weighted sampling algorithm based on the number of
// samples the reservoir has seen so far. The sampling will "slow down" as
// more and more samples are offered so as to reduce a bias towards those
// offered just prior to the end of the collection.
//
// This algorithm is preferred because of its balance of simplicity and
// performance. It will compute three random numbers (the bulk of
// computation time) for each item that becomes part of the reservoir, but
// it does not spend any time on items that do not. In particular it has an
// asymptotic runtime of O(k(1 + log(n/k))) where n is the number of
// measurements offered and k is the reservoir size.
//
// See https://en.wikipedia.org/wiki/Reservoir_sampling for an overview of
// this and other reservoir sampling algorithms. See
// https://github.com/MrAlias/reservoir-sampling for a performance
// comparison of reservoir sampling algorithms.
if int(r.count) < cap(r.store) {
r.store[r.count] = newMeasurement(ctx, t, n, a)
} else if r.count == r.next {
// Overwrite a random existing measurement with the one offered.
idx := int(rng.Int63n(int64(cap(r.store))))
r.store[idx] = newMeasurement(ctx, t, n, a)
r.advance()
}
r.count++
}
// reset resets r to the initial state.
func (r *randRes) reset() {
// This resets the number of exemplars known.
r.count = 0
// Random index inserts should only happen after the storage is full.
r.next = int64(cap(r.store))
// Initial random number in the series used to generate r.next.
//
// This is set before r.advance to reset or initialize the random number
// series. Without doing so it would always be 0 or never restart a new
// random number series.
//
// This maps the uniform random number in (0,1) to a geometric distribution
// over the same interval. The mean of the distribution is inversely
// proportional to the storage capacity.
r.w = math.Exp(math.Log(random()) / float64(cap(r.store)))
r.advance()
}
// advance updates the count at which the offered measurement will overwrite an
// existing exemplar.
func (r *randRes) advance() {
// Calculate the next value in the random number series.
//
// The current value of r.w is based on the max of a distribution of random
// numbers (i.e. `w = max(u_1,u_2,...,u_k)` for `k` equal to the capacity
// of the storage and each `u` in the interval (0,w)). To calculate the
// next r.w we use the fact that when the next exemplar is selected to be
// included in the storage an existing one will be dropped, and the
// corresponding random number in the set used to calculate r.w will also
// be replaced. The replacement random number will also be within (0,w),
// therefore the next r.w will be based on the same distribution (i.e.
// `max(u_1,u_2,...,u_k)`). Therefore, we can sample the next r.w by
// computing the next random number `u` and take r.w as `w * u^(1/k)`.
r.w *= math.Exp(math.Log(random()) / float64(cap(r.store)))
// Use the new random number in the series to calculate the count of the
// next measurement that will be stored.
//
// Given 0 < r.w < 1, each iteration will result in subsequent r.w being
// smaller. This translates here into the next value of r.next being
// selected against a distribution with a higher mean (i.e. the expected
// value will increase and replacements become less likely).
//
// Important to note, the new r.next will always be at least 1 more than
// the last r.next.
r.next += int64(math.Log(random())/math.Log(1-r.w)) + 1
}
func (r *randRes) Collect(dest *[]Exemplar) {
r.storage.Collect(dest)
// Call reset here even though it will reset r.count and restart the random
// number series. This will persist any old exemplars as long as no new
// measurements are offered, but it will also prioritize those new
// measurements that are made over the older collection cycle ones.
r.reset()
}
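For reference, the same bookkeeping can be exercised outside the SDK. Below is a minimal, self-contained sketch of "Algorithm L" over a plain slice, mirroring the `w`/`next` updates performed by `randRes` above; `sampleK` and `randOpen` are illustrative names, not part of this package.

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// randOpen returns a uniform random number in (0, 1). rand.Float64 may
// return 0, which would make the math.Log calls below return -Inf.
func randOpen() float64 {
	f := rand.Float64()
	for f == 0 {
		f = rand.Float64()
	}
	return f
}

// sampleK reservoir-samples k values from vals using Algorithm L.
func sampleK(vals []float64, k int) []float64 {
	res := make([]float64, 0, k)
	// w and next mirror randRes.w and randRes.next: w is the running
	// maximum of the kept random numbers, next is the index of the next
	// offer that will replace a stored sample.
	w := math.Exp(math.Log(randOpen()) / float64(k))
	next := k + int(math.Log(randOpen())/math.Log(1-w)) + 1
	for count, v := range vals {
		if len(res) < k {
			res = append(res, v) // fill phase: accept everything
			continue
		}
		if count == next {
			res[rand.Intn(k)] = v // replace a random existing sample
			w *= math.Exp(math.Log(randOpen()) / float64(k))
			next += int(math.Log(randOpen())/math.Log(1-w)) + 1
		}
	}
	return res
}

func main() {
	vals := make([]float64, 1000)
	for i := range vals {
		vals[i] = float64(i)
	}
	fmt.Println(sampleK(vals, 4)) // e.g. [311 1 842 97]
}
```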


@ -0,0 +1,32 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package exemplar // import "go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
import (
"context"
"time"
"go.opentelemetry.io/otel/attribute"
)
// Reservoir holds the sampled exemplar of measurements made.
type Reservoir interface {
// Offer accepts the parameters associated with a measurement. The
// parameters will be stored as an exemplar if the Reservoir decides to
// sample the measurement.
//
// The passed ctx needs to contain any baggage or span that were active
// when the measurement was made. This information may be used by the
// Reservoir in making a sampling decision.
//
// The time t is the time when the measurement was made. The val and attr
// parameters are the value and dropped (filtered) attributes of the
// measurement respectively.
Offer(ctx context.Context, t time.Time, val Value, attr []attribute.KeyValue)
// Collect returns all the held exemplars.
//
// The Reservoir state is preserved after this call.
Collect(dest *[]Exemplar)
}
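To make the contract concrete, here is a hypothetical in-package Reservoir that keeps only the most recent measurement. `lastValueRes` is not a type in this package; the sketch assumes the `storage` and `newMeasurement` helpers defined in the following file, from which Collect is inherited.

```go
// lastValueRes is a hypothetical Reservoir that samples only the most
// recent measurement offered. The embedded storage provides Collect.
type lastValueRes struct {
	*storage
}

func newLastValueRes() *lastValueRes {
	return &lastValueRes{storage: newStorage(1)}
}

// Offer unconditionally records the measurement, overwriting any prior one.
func (r *lastValueRes) Offer(ctx context.Context, t time.Time, val Value, attr []attribute.KeyValue) {
	r.store[0] = newMeasurement(ctx, t, val, attr)
}
```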


@ -0,0 +1,95 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package exemplar // import "go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
import (
"context"
"time"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
)
// storage is an exemplar storage for [Reservoir] implementations.
type storage struct {
// store are the measurements sampled.
//
// This does not use []metricdata.Exemplar because it potentially would
// require an allocation for trace and span IDs in the hot path of Offer.
store []measurement
}
func newStorage(n int) *storage {
return &storage{store: make([]measurement, n)}
}
// Collect returns all the held exemplars.
//
// The Reservoir state is preserved after this call.
func (r *storage) Collect(dest *[]Exemplar) {
*dest = reset(*dest, len(r.store), len(r.store))
var n int
for _, m := range r.store {
if !m.valid {
continue
}
m.Exemplar(&(*dest)[n])
n++
}
*dest = (*dest)[:n]
}
// measurement is a measurement made by a telemetry system.
type measurement struct {
// FilteredAttributes are the attributes dropped during the measurement.
FilteredAttributes []attribute.KeyValue
// Time is the time when the measurement was made.
Time time.Time
// Value is the value of the measurement.
Value Value
// SpanContext is the SpanContext active when a measurement was made.
SpanContext trace.SpanContext
valid bool
}
// newMeasurement returns a new, valid measurement.
func newMeasurement(ctx context.Context, ts time.Time, v Value, droppedAttr []attribute.KeyValue) measurement {
return measurement{
FilteredAttributes: droppedAttr,
Time: ts,
Value: v,
SpanContext: trace.SpanContextFromContext(ctx),
valid: true,
}
}
// Exemplar returns m as an [Exemplar].
func (m measurement) Exemplar(dest *Exemplar) {
dest.FilteredAttributes = m.FilteredAttributes
dest.Time = m.Time
dest.Value = m.Value
if m.SpanContext.HasTraceID() {
traceID := m.SpanContext.TraceID()
dest.TraceID = traceID[:]
} else {
dest.TraceID = dest.TraceID[:0]
}
if m.SpanContext.HasSpanID() {
spanID := m.SpanContext.SpanID()
dest.SpanID = spanID[:]
} else {
dest.SpanID = dest.SpanID[:0]
}
}
func reset[T any](s []T, length, capacity int) []T {
if cap(s) < capacity {
return make([]T, length, capacity)
}
return s[:length]
}
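A hypothetical in-package sketch of Collect's behavior: slots never written by newMeasurement keep valid == false and are skipped, so only offered measurements surface as exemplars. It assumes the NewValue constructor and Exemplar type defined elsewhere in this package.

```go
// exampleCollect is a hypothetical helper showing that Collect returns
// only the slots populated via newMeasurement (valid == true).
func exampleCollect(ctx context.Context) int {
	s := newStorage(3)
	s.store[1] = newMeasurement(ctx, time.Now(), NewValue(int64(7)), nil)

	var dest []Exemplar
	s.Collect(&dest)
	return len(dest) // 1: the two zero-valued slots were skipped
}
```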


@ -0,0 +1,57 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package exemplar // import "go.opentelemetry.io/otel/sdk/metric/internal/exemplar"
import "math"
// ValueType identifies the type of value used in exemplar data.
type ValueType uint8
const (
// UnknownValueType should not be used. It represents a misconfigured
// Value.
UnknownValueType ValueType = 0
// Int64ValueType represents a Value with int64 data.
Int64ValueType ValueType = 1
// Float64ValueType represents a Value with float64 data.
Float64ValueType ValueType = 2
)
// Value is the value of data held by an exemplar.
type Value struct {
t ValueType
val uint64
}
// NewValue returns a new [Value] for the provided value.
func NewValue[N int64 | float64](value N) Value {
switch v := any(value).(type) {
case int64:
return Value{t: Int64ValueType, val: uint64(v)}
case float64:
return Value{t: Float64ValueType, val: math.Float64bits(v)}
}
return Value{}
}
// Type returns the [ValueType] of data held by v.
func (v Value) Type() ValueType { return v.t }
// Int64 returns the value of v as an int64. If the ValueType of v is not an
// Int64ValueType, 0 is returned.
func (v Value) Int64() int64 {
if v.t == Int64ValueType {
return int64(v.val)
}
return 0
}
// Float64 returns the value of v as a float64. If the ValueType of v is not
// a Float64ValueType, 0 is returned.
func (v Value) Float64() float64 {
if v.t == Float64ValueType {
return math.Float64frombits(v.val)
}
return 0
}
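A standalone sketch of the bit-packing Value relies on: a float64 round-trips losslessly through math.Float64bits/Float64frombits, and an int64 through a plain uint64 conversion.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// float64 payloads are stored as their IEEE 754 bit pattern.
	f := 3.75
	fmt.Println(math.Float64frombits(math.Float64bits(f)) == f) // true

	// int64 payloads round-trip through a uint64 conversion.
	n := int64(-42)
	fmt.Println(int64(uint64(n)) == n) // true
}
```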


@ -0,0 +1,13 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package internal // import "go.opentelemetry.io/otel/sdk/metric/internal"
// ReuseSlice returns a view of slice with length n if its capacity is greater
// than or equal to n; the view's contents are not zeroed. Otherwise, it
// returns a new []T with length and capacity equal to n.
func ReuseSlice[T any](slice []T, n int) []T {
if cap(slice) >= n {
return slice[:n]
}
return make([]T, n)
}
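A runnable sketch of the reuse pattern ReuseSlice implements, using a local copy of the function. Note that the reused view keeps its old contents, which is why callers must overwrite every element they later read.

```go
package main

import "fmt"

// reuseSlice is a local copy of ReuseSlice for demonstration.
func reuseSlice[T any](slice []T, n int) []T {
	if cap(slice) >= n {
		return slice[:n]
	}
	return make([]T, n)
}

func main() {
	buf := []int{1, 2, 3, 4}

	buf = reuseSlice(buf, 2)
	fmt.Println(buf, cap(buf)) // [1 2] 4: same backing array, stale tail kept

	buf = reuseSlice(buf, 8)
	fmt.Println(buf, cap(buf)) // [0 0 0 0 0 0 0 0] 8: freshly allocated
}
```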


@ -0,0 +1,112 @@
# Experimental Features
The metric SDK contains features that have not yet stabilized in the OpenTelemetry specification.
These features are added to the OpenTelemetry Go metric SDK prior to stabilization in the specification so that users can start experimenting with them and provide feedback.
These features may change in backwards-incompatible ways as feedback is applied.
See the [Compatibility and Stability](#compatibility-and-stability) section for more information.
## Features
- [Cardinality Limit](#cardinality-limit)
- [Exemplars](#exemplars)
### Cardinality Limit
The cardinality limit is the hard limit on the number of metric streams that can be collected for a single instrument.
This experimental feature can be enabled by setting the `OTEL_GO_X_CARDINALITY_LIMIT` environment variable.
The value must be an integer; all other values are ignored.
If the value is less than or equal to `0`, no limit will be applied.
#### Examples
Set the cardinality limit to 2000.
```console
export OTEL_GO_X_CARDINALITY_LIMIT=2000
```
Set an infinite cardinality limit (functionally equivalent to disabling the feature).
```console
export OTEL_GO_X_CARDINALITY_LIMIT=-1
```
Disable the cardinality limit.
```console
unset OTEL_GO_X_CARDINALITY_LIMIT
```
### Exemplars
A sample of measurements made may be exported directly as a set of exemplars.
This experimental feature can be enabled by setting the `OTEL_GO_X_EXEMPLAR` environment variable.
The value must be the case-insensitive string `"true"` to enable the feature.
All other values are ignored.
Exemplar filters are also supported.
The exemplar filter applies to all measurements made.
It filters these measurements, only allowing certain measurements to be passed to the underlying exemplar reservoir.
To change the exemplar filter from the default `"trace_based"` filter set the `OTEL_METRICS_EXEMPLAR_FILTER` environment variable.
The value must be the case-sensitive string defined by the [OpenTelemetry specification].
- `"always_on"`: allows all measurements
- `"always_off"`: denies all measurements
- `"trace_based"`: allows only sampled measurements
All values other than these will result in the default, `"trace_based"`, exemplar filter being used.
[OpenTelemetry specification]: https://github.com/open-telemetry/opentelemetry-specification/blob/a6ca2fd484c9e76fe1d8e1c79c99f08f4745b5ee/specification/configuration/sdk-environment-variables.md#exemplar
#### Examples
Enable exemplars to be exported.
```console
export OTEL_GO_X_EXEMPLAR=true
```
Disable exemplars from being exported.
```console
unset OTEL_GO_X_EXEMPLAR
```
Set the exemplar filter to allow all measurements.
```console
export OTEL_METRICS_EXEMPLAR_FILTER=always_on
```
Set the exemplar filter to deny all measurements.
```console
export OTEL_METRICS_EXEMPLAR_FILTER=always_off
```
Set the exemplar filter to only allow sampled measurements.
```console
export OTEL_METRICS_EXEMPLAR_FILTER=trace_based
```
Revert to the default exemplar filter (`"trace_based"`).
```console
unset OTEL_METRICS_EXEMPLAR_FILTER
```
## Compatibility and Stability
Experimental features do not fall within the scope of the OpenTelemetry Go versioning and stability [policy](../../../../VERSIONING.md).
These features may be removed or modified in successive version releases, including patch versions.
When an experimental feature is promoted to a stable feature, a migration path will be included in the changelog entry of the release.
There is no guarantee that any environment variable feature flags that enabled the experimental feature will be supported by the stable version.
If they are supported, they may be accompanied with a deprecation notice stating a timeline for the removal of that support.


@ -0,0 +1,85 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package x contains support for OTel metric SDK experimental features.
//
// This package should only be used for features defined in the specification.
// It should not be used for experiments or new project ideas.
package x // import "go.opentelemetry.io/otel/sdk/metric/internal/x"
import (
"os"
"strconv"
"strings"
)
var (
// Exemplars is an experimental feature flag that defines if exemplars
// should be recorded for metric data-points.
//
// To enable this feature set the OTEL_GO_X_EXEMPLAR environment variable
// to the case-insensitive string value of "true" (i.e. "True" and "TRUE"
// will also enable this).
Exemplars = newFeature("EXEMPLAR", func(v string) (string, bool) {
if strings.ToLower(v) == "true" {
return v, true
}
return "", false
})
// CardinalityLimit is an experimental feature flag that defines if
// cardinality limits should be applied to the recorded metric data-points.
//
// To enable this feature set the OTEL_GO_X_CARDINALITY_LIMIT environment
// variable to the integer limit value you want to use.
//
// Setting OTEL_GO_X_CARDINALITY_LIMIT to a value less than or equal to 0
// will disable the cardinality limits.
CardinalityLimit = newFeature("CARDINALITY_LIMIT", func(v string) (int, bool) {
n, err := strconv.Atoi(v)
if err != nil {
return 0, false
}
return n, true
})
)
// Feature is an experimental feature control flag. It provides a uniform way
// to interact with these feature flags and parse their values.
type Feature[T any] struct {
key string
parse func(v string) (T, bool)
}
func newFeature[T any](suffix string, parse func(string) (T, bool)) Feature[T] {
const envKeyRoot = "OTEL_GO_X_"
return Feature[T]{
key: envKeyRoot + suffix,
parse: parse,
}
}
// Key returns the environment variable key that needs to be set to enable the
// feature.
func (f Feature[T]) Key() string { return f.key }
// Lookup returns the user configured value for the feature and true if the
// user has enabled the feature. Otherwise, if the feature is not enabled, a
// zero-value and false are returned.
func (f Feature[T]) Lookup() (v T, ok bool) {
// https://github.com/open-telemetry/opentelemetry-specification/blob/62effed618589a0bec416a87e559c0a9d96289bb/specification/configuration/sdk-environment-variables.md#parsing-empty-value
//
// > The SDK MUST interpret an empty value of an environment variable the
// > same way as when the variable is unset.
vRaw := os.Getenv(f.key)
if vRaw == "" {
return v, ok
}
return f.parse(vRaw)
}
// Enabled returns if the feature is enabled.
func (f Feature[T]) Enabled() bool {
_, ok := f.Lookup()
return ok
}
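A hypothetical in-package usage sketch of the two flags above; the exampleFlags function (and the fmt import it would require) are illustrative only.

```go
// exampleFlags is a hypothetical demonstration of reading both feature
// flags: Lookup returns the parsed value, Enabled discards it.
func exampleFlags() {
	if limit, ok := CardinalityLimit.Lookup(); ok {
		fmt.Printf("%s=%d: cardinality limit active\n", CardinalityLimit.Key(), limit)
	}
	if Exemplars.Enabled() {
		fmt.Println("exemplars enabled via", Exemplars.Key())
	}
}
```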


@ -0,0 +1,203 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"context"
"errors"
"fmt"
"sync"
"sync/atomic"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
)
// ManualReader is a simple Reader that allows an application to
// read metrics on demand.
type ManualReader struct {
sdkProducer atomic.Value
shutdownOnce sync.Once
mu sync.Mutex
isShutdown bool
externalProducers atomic.Value
temporalitySelector TemporalitySelector
aggregationSelector AggregationSelector
}
// Compile-time check that ManualReader implements Reader and is comparable.
var _ = map[Reader]struct{}{&ManualReader{}: {}}
// NewManualReader returns a Reader which is directly called to collect metrics.
func NewManualReader(opts ...ManualReaderOption) *ManualReader {
cfg := newManualReaderConfig(opts)
r := &ManualReader{
temporalitySelector: cfg.temporalitySelector,
aggregationSelector: cfg.aggregationSelector,
}
r.externalProducers.Store(cfg.producers)
return r
}
// register stores the sdkProducer which enables the caller
// to read metrics from the SDK on demand.
func (mr *ManualReader) register(p sdkProducer) {
// Only register once. If producer is already set, do nothing.
if !mr.sdkProducer.CompareAndSwap(nil, produceHolder{produce: p.produce}) {
msg := "did not register manual reader"
global.Error(errDuplicateRegister, msg)
}
}
// temporality reports the Temporality for the instrument kind provided.
func (mr *ManualReader) temporality(kind InstrumentKind) metricdata.Temporality {
return mr.temporalitySelector(kind)
}
// aggregation returns what Aggregation to use for kind.
func (mr *ManualReader) aggregation(kind InstrumentKind) Aggregation { // nolint:revive // import-shadow for method scoped by type.
return mr.aggregationSelector(kind)
}
// Shutdown closes any connections and frees any resources used by the reader.
//
// This method is safe to call concurrently.
func (mr *ManualReader) Shutdown(context.Context) error {
err := ErrReaderShutdown
mr.shutdownOnce.Do(func() {
// Any future call to Collect will now return ErrReaderShutdown.
mr.sdkProducer.Store(produceHolder{
produce: shutdownProducer{}.produce,
})
mr.mu.Lock()
defer mr.mu.Unlock()
mr.isShutdown = true
// release references to Producer(s)
mr.externalProducers.Store([]Producer{})
err = nil
})
return err
}
// Collect gathers all metric data related to the Reader from
// the SDK and other Producers and stores the result in rm.
//
// Collect will return an error if called after shutdown.
// Collect will return an error if rm is a nil ResourceMetrics.
// Collect will return an error if the context's Done channel is closed.
//
// This method is safe to call concurrently.
func (mr *ManualReader) Collect(ctx context.Context, rm *metricdata.ResourceMetrics) error {
if rm == nil {
return errors.New("manual reader: *metricdata.ResourceMetrics is nil")
}
p := mr.sdkProducer.Load()
if p == nil {
return ErrReaderNotRegistered
}
ph, ok := p.(produceHolder)
if !ok {
// The atomic.Value is entirely in the ManualReader's control so
// this should never happen. In the unforeseen case that this does
// happen, return an error instead of panicking so a user's code does
// not halt in the process.
err := fmt.Errorf("manual reader: invalid producer: %T", p)
return err
}
err := ph.produce(ctx, rm)
if err != nil {
return err
}
var errs []error
for _, producer := range mr.externalProducers.Load().([]Producer) {
externalMetrics, err := producer.Produce(ctx)
if err != nil {
errs = append(errs, err)
}
rm.ScopeMetrics = append(rm.ScopeMetrics, externalMetrics...)
}
global.Debug("ManualReader collection", "Data", rm)
return unifyErrors(errs)
}
// MarshalLog returns logging data about the ManualReader.
func (r *ManualReader) MarshalLog() interface{} {
r.mu.Lock()
down := r.isShutdown
r.mu.Unlock()
return struct {
Type string
Registered bool
Shutdown bool
}{
Type: "ManualReader",
Registered: r.sdkProducer.Load() != nil,
Shutdown: down,
}
}
// manualReaderConfig contains configuration options for a ManualReader.
type manualReaderConfig struct {
temporalitySelector TemporalitySelector
aggregationSelector AggregationSelector
producers []Producer
}
// newManualReaderConfig returns a manualReaderConfig configured with options.
func newManualReaderConfig(opts []ManualReaderOption) manualReaderConfig {
cfg := manualReaderConfig{
temporalitySelector: DefaultTemporalitySelector,
aggregationSelector: DefaultAggregationSelector,
}
for _, opt := range opts {
cfg = opt.applyManual(cfg)
}
return cfg
}
// ManualReaderOption applies a configuration option value to a ManualReader.
type ManualReaderOption interface {
applyManual(manualReaderConfig) manualReaderConfig
}
// WithTemporalitySelector sets the TemporalitySelector a reader will use to
// determine the Temporality of an instrument based on its kind. If this
// option is not used, the reader will use the DefaultTemporalitySelector.
func WithTemporalitySelector(selector TemporalitySelector) ManualReaderOption {
return temporalitySelectorOption{selector: selector}
}
type temporalitySelectorOption struct {
selector func(instrument InstrumentKind) metricdata.Temporality
}
// applyManual returns a manualReaderConfig with option applied.
func (t temporalitySelectorOption) applyManual(mrc manualReaderConfig) manualReaderConfig {
mrc.temporalitySelector = t.selector
return mrc
}
// WithAggregationSelector sets the AggregationSelector a reader will use to
// determine the aggregation to use for an instrument based on its kind. If
// this option is not used, the reader will use the DefaultAggregationSelector
// or the aggregation explicitly passed for a view matching an instrument.
func WithAggregationSelector(selector AggregationSelector) ManualReaderOption {
return aggregationSelectorOption{selector: selector}
}
type aggregationSelectorOption struct {
selector AggregationSelector
}
// applyManual returns a manualReaderConfig with option applied.
func (t aggregationSelectorOption) applyManual(c manualReaderConfig) manualReaderConfig {
c.aggregationSelector = t.selector
return c
}
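A runnable usage sketch wiring a ManualReader into a MeterProvider and collecting on demand, using only the public sdk/metric and metricdata APIs vendored in this tree.

```go
package main

import (
	"context"
	"fmt"

	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

func main() {
	ctx := context.Background()

	reader := sdkmetric.NewManualReader()
	provider := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader))
	defer func() { _ = provider.Shutdown(ctx) }()

	counter, err := provider.Meter("example").Int64Counter("requests")
	if err != nil {
		panic(err)
	}
	counter.Add(ctx, 5)

	// Collect gathers all metric data recorded since the last collection.
	var rm metricdata.ResourceMetrics
	if err := reader.Collect(ctx, &rm); err != nil {
		panic(err)
	}
	fmt.Println("scopes collected:", len(rm.ScopeMetrics))
}
```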

709
vendor/go.opentelemetry.io/otel/sdk/metric/meter.go generated vendored Normal file

@ -0,0 +1,709 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"context"
"errors"
"fmt"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/metric/embedded"
"go.opentelemetry.io/otel/sdk/instrumentation"
"go.opentelemetry.io/otel/sdk/metric/internal/aggregate"
)
// ErrInstrumentName indicates the created instrument has an invalid name.
// Valid names must consist of 255 or fewer characters, limited to alphanumerics and '_', '.', '-', '/', and must start with a letter.
var ErrInstrumentName = errors.New("invalid instrument name")
// meter handles the creation and coordination of all metric instruments. A
// meter represents a single instrumentation scope; all metric telemetry
// produced by an instrumentation scope will use metric instruments from a
// single meter.
type meter struct {
embedded.Meter
scope instrumentation.Scope
pipes pipelines
int64Insts *cacheWithErr[instID, *int64Inst]
float64Insts *cacheWithErr[instID, *float64Inst]
int64ObservableInsts *cacheWithErr[instID, int64Observable]
float64ObservableInsts *cacheWithErr[instID, float64Observable]
int64Resolver resolver[int64]
float64Resolver resolver[float64]
}
func newMeter(s instrumentation.Scope, p pipelines) *meter {
// viewCache ensures that instrument conflicts this meter is asked to
// create, including number conflicts, are logged to the user.
var viewCache cache[string, instID]
var int64Insts cacheWithErr[instID, *int64Inst]
var float64Insts cacheWithErr[instID, *float64Inst]
var int64ObservableInsts cacheWithErr[instID, int64Observable]
var float64ObservableInsts cacheWithErr[instID, float64Observable]
return &meter{
scope: s,
pipes: p,
int64Insts: &int64Insts,
float64Insts: &float64Insts,
int64ObservableInsts: &int64ObservableInsts,
float64ObservableInsts: &float64ObservableInsts,
int64Resolver: newResolver[int64](p, &viewCache),
float64Resolver: newResolver[float64](p, &viewCache),
}
}
// Compile-time check meter implements metric.Meter.
var _ metric.Meter = (*meter)(nil)
// Int64Counter returns a new instrument identified by name and configured with
// options. The instrument is used to synchronously record increasing int64
// measurements during a computational operation.
func (m *meter) Int64Counter(name string, options ...metric.Int64CounterOption) (metric.Int64Counter, error) {
cfg := metric.NewInt64CounterConfig(options...)
const kind = InstrumentKindCounter
p := int64InstProvider{m}
i, err := p.lookup(kind, name, cfg.Description(), cfg.Unit())
if err != nil {
return i, err
}
return i, validateInstrumentName(name)
}
// Int64UpDownCounter returns a new instrument identified by name and
// configured with options. The instrument is used to synchronously record
// int64 measurements during a computational operation.
func (m *meter) Int64UpDownCounter(name string, options ...metric.Int64UpDownCounterOption) (metric.Int64UpDownCounter, error) {
cfg := metric.NewInt64UpDownCounterConfig(options...)
const kind = InstrumentKindUpDownCounter
p := int64InstProvider{m}
i, err := p.lookup(kind, name, cfg.Description(), cfg.Unit())
if err != nil {
return i, err
}
return i, validateInstrumentName(name)
}
// Int64Histogram returns a new instrument identified by name and configured
// with options. The instrument is used to synchronously record the
// distribution of int64 measurements during a computational operation.
func (m *meter) Int64Histogram(name string, options ...metric.Int64HistogramOption) (metric.Int64Histogram, error) {
cfg := metric.NewInt64HistogramConfig(options...)
p := int64InstProvider{m}
i, err := p.lookupHistogram(name, cfg)
if err != nil {
return i, err
}
return i, validateInstrumentName(name)
}
// Int64Gauge returns a new instrument identified by name and configured
// with options. The instrument is used to synchronously record
// instantaneous int64 measurements during a computational operation.
func (m *meter) Int64Gauge(name string, options ...metric.Int64GaugeOption) (metric.Int64Gauge, error) {
cfg := metric.NewInt64GaugeConfig(options...)
const kind = InstrumentKindGauge
p := int64InstProvider{m}
i, err := p.lookup(kind, name, cfg.Description(), cfg.Unit())
if err != nil {
return i, err
}
return i, validateInstrumentName(name)
}
// int64ObservableInstrument returns a new observable identified by the Instrument.
// It registers callbacks for each reader's pipeline.
func (m *meter) int64ObservableInstrument(id Instrument, callbacks []metric.Int64Callback) (int64Observable, error) {
key := instID{
Name: id.Name,
Description: id.Description,
Unit: id.Unit,
Kind: id.Kind,
}
if m.int64ObservableInsts.HasKey(key) && len(callbacks) > 0 {
warnRepeatedObservableCallbacks(id)
}
return m.int64ObservableInsts.Lookup(key, func() (int64Observable, error) {
inst := newInt64Observable(m, id.Kind, id.Name, id.Description, id.Unit)
for _, insert := range m.int64Resolver.inserters {
// Connect the measure functions for instruments in this pipeline with the
// callbacks for this pipeline.
in, err := insert.Instrument(id, insert.readerDefaultAggregation(id.Kind))
if err != nil {
return inst, err
}
// Drop aggregation
if len(in) == 0 {
inst.dropAggregation = true
continue
}
inst.appendMeasures(in)
for _, cback := range callbacks {
inst := int64Observer{measures: in}
fn := cback
insert.addCallback(func(ctx context.Context) error { return fn(ctx, inst) })
}
}
return inst, validateInstrumentName(id.Name)
})
}
// Int64ObservableCounter returns a new instrument identified by name and
// configured with options. The instrument is used to asynchronously record
// increasing int64 measurements once per a measurement collection cycle.
// Only the measurements recorded during the collection cycle are exported.
//
// If Int64ObservableCounter is invoked repeatedly with the same Name,
// Description, and Unit, only the first set of callbacks provided are used.
// Use meter.RegisterCallback and Registration.Unregister to manage callbacks
// if instrumentation can be created multiple times with different callbacks.
func (m *meter) Int64ObservableCounter(name string, options ...metric.Int64ObservableCounterOption) (metric.Int64ObservableCounter, error) {
cfg := metric.NewInt64ObservableCounterConfig(options...)
id := Instrument{
Name: name,
Description: cfg.Description(),
Unit: cfg.Unit(),
Kind: InstrumentKindObservableCounter,
Scope: m.scope,
}
return m.int64ObservableInstrument(id, cfg.Callbacks())
}
// Int64ObservableUpDownCounter returns a new instrument identified by name and
// configured with options. The instrument is used to asynchronously record
// int64 measurements once per a measurement collection cycle. Only the
// measurements recorded during the collection cycle are exported.
func (m *meter) Int64ObservableUpDownCounter(name string, options ...metric.Int64ObservableUpDownCounterOption) (metric.Int64ObservableUpDownCounter, error) {
cfg := metric.NewInt64ObservableUpDownCounterConfig(options...)
id := Instrument{
Name: name,
Description: cfg.Description(),
Unit: cfg.Unit(),
Kind: InstrumentKindObservableUpDownCounter,
Scope: m.scope,
}
return m.int64ObservableInstrument(id, cfg.Callbacks())
}
// Int64ObservableGauge returns a new instrument identified by name and
// configured with options. The instrument is used to asynchronously record
// instantaneous int64 measurements once per a measurement collection cycle.
// Only the measurements recorded during the collection cycle are exported.
func (m *meter) Int64ObservableGauge(name string, options ...metric.Int64ObservableGaugeOption) (metric.Int64ObservableGauge, error) {
cfg := metric.NewInt64ObservableGaugeConfig(options...)
id := Instrument{
Name: name,
Description: cfg.Description(),
Unit: cfg.Unit(),
Kind: InstrumentKindObservableGauge,
Scope: m.scope,
}
return m.int64ObservableInstrument(id, cfg.Callbacks())
}
// Float64Counter returns a new instrument identified by name and configured
// with options. The instrument is used to synchronously record increasing
// float64 measurements during a computational operation.
func (m *meter) Float64Counter(name string, options ...metric.Float64CounterOption) (metric.Float64Counter, error) {
cfg := metric.NewFloat64CounterConfig(options...)
const kind = InstrumentKindCounter
p := float64InstProvider{m}
i, err := p.lookup(kind, name, cfg.Description(), cfg.Unit())
if err != nil {
return i, err
}
return i, validateInstrumentName(name)
}
// Float64UpDownCounter returns a new instrument identified by name and
// configured with options. The instrument is used to synchronously record
// float64 measurements during a computational operation.
func (m *meter) Float64UpDownCounter(name string, options ...metric.Float64UpDownCounterOption) (metric.Float64UpDownCounter, error) {
cfg := metric.NewFloat64UpDownCounterConfig(options...)
const kind = InstrumentKindUpDownCounter
p := float64InstProvider{m}
i, err := p.lookup(kind, name, cfg.Description(), cfg.Unit())
if err != nil {
return i, err
}
return i, validateInstrumentName(name)
}
// Float64Histogram returns a new instrument identified by name and configured
// with options. The instrument is used to synchronously record the
// distribution of float64 measurements during a computational operation.
func (m *meter) Float64Histogram(name string, options ...metric.Float64HistogramOption) (metric.Float64Histogram, error) {
cfg := metric.NewFloat64HistogramConfig(options...)
p := float64InstProvider{m}
i, err := p.lookupHistogram(name, cfg)
if err != nil {
return i, err
}
return i, validateInstrumentName(name)
}
// Float64Gauge returns a new instrument identified by name and configured
// with options. The instrument is used to synchronously record
// instantaneous float64 measurements during a computational operation.
func (m *meter) Float64Gauge(name string, options ...metric.Float64GaugeOption) (metric.Float64Gauge, error) {
cfg := metric.NewFloat64GaugeConfig(options...)
const kind = InstrumentKindGauge
p := float64InstProvider{m}
i, err := p.lookup(kind, name, cfg.Description(), cfg.Unit())
if err != nil {
return i, err
}
return i, validateInstrumentName(name)
}
// float64ObservableInstrument returns a new observable identified by the Instrument.
// It registers callbacks for each reader's pipeline.
func (m *meter) float64ObservableInstrument(id Instrument, callbacks []metric.Float64Callback) (float64Observable, error) {
key := instID{
Name: id.Name,
Description: id.Description,
Unit: id.Unit,
Kind: id.Kind,
}
if m.float64ObservableInsts.HasKey(key) && len(callbacks) > 0 {
warnRepeatedObservableCallbacks(id)
}
return m.float64ObservableInsts.Lookup(key, func() (float64Observable, error) {
inst := newFloat64Observable(m, id.Kind, id.Name, id.Description, id.Unit)
for _, insert := range m.float64Resolver.inserters {
// Connect the measure functions for instruments in this pipeline with the
// callbacks for this pipeline.
in, err := insert.Instrument(id, insert.readerDefaultAggregation(id.Kind))
if err != nil {
return inst, err
}
// Drop aggregation
if len(in) == 0 {
inst.dropAggregation = true
continue
}
inst.appendMeasures(in)
for _, cback := range callbacks {
inst := float64Observer{measures: in}
fn := cback
insert.addCallback(func(ctx context.Context) error { return fn(ctx, inst) })
}
}
return inst, validateInstrumentName(id.Name)
})
}
// Float64ObservableCounter returns a new instrument identified by name and
// configured with options. The instrument is used to asynchronously record
// increasing float64 measurements once per a measurement collection cycle.
// Only the measurements recorded during the collection cycle are exported.
//
// If Float64ObservableCounter is invoked repeatedly with the same Name,
// Description, and Unit, only the first set of callbacks provided are used.
// Use meter.RegisterCallback and Registration.Unregister to manage callbacks
// if instrumentation can be created multiple times with different callbacks.
func (m *meter) Float64ObservableCounter(name string, options ...metric.Float64ObservableCounterOption) (metric.Float64ObservableCounter, error) {
cfg := metric.NewFloat64ObservableCounterConfig(options...)
id := Instrument{
Name: name,
Description: cfg.Description(),
Unit: cfg.Unit(),
Kind: InstrumentKindObservableCounter,
Scope: m.scope,
}
return m.float64ObservableInstrument(id, cfg.Callbacks())
}
// Float64ObservableUpDownCounter returns a new instrument identified by name
// and configured with options. The instrument is used to asynchronously record
// float64 measurements once per a measurement collection cycle. Only the
// measurements recorded during the collection cycle are exported.
func (m *meter) Float64ObservableUpDownCounter(name string, options ...metric.Float64ObservableUpDownCounterOption) (metric.Float64ObservableUpDownCounter, error) {
cfg := metric.NewFloat64ObservableUpDownCounterConfig(options...)
id := Instrument{
Name: name,
Description: cfg.Description(),
Unit: cfg.Unit(),
Kind: InstrumentKindObservableUpDownCounter,
Scope: m.scope,
}
return m.float64ObservableInstrument(id, cfg.Callbacks())
}
// Float64ObservableGauge returns a new instrument identified by name and
// configured with options. The instrument is used to asynchronously record
// instantaneous float64 measurements once per a measurement collection cycle.
// Only the measurements recorded during the collection cycle are exported.
func (m *meter) Float64ObservableGauge(name string, options ...metric.Float64ObservableGaugeOption) (metric.Float64ObservableGauge, error) {
cfg := metric.NewFloat64ObservableGaugeConfig(options...)
id := Instrument{
Name: name,
Description: cfg.Description(),
Unit: cfg.Unit(),
Kind: InstrumentKindObservableGauge,
Scope: m.scope,
}
return m.float64ObservableInstrument(id, cfg.Callbacks())
}
func validateInstrumentName(name string) error {
if len(name) == 0 {
return fmt.Errorf("%w: %s: is empty", ErrInstrumentName, name)
}
if len(name) > 255 {
return fmt.Errorf("%w: %s: longer than 255 characters", ErrInstrumentName, name)
}
if !isAlpha([]rune(name)[0]) {
return fmt.Errorf("%w: %s: must start with a letter", ErrInstrumentName, name)
}
if len(name) == 1 {
return nil
}
for _, c := range name[1:] {
if !isAlphanumeric(c) && c != '_' && c != '.' && c != '-' && c != '/' {
return fmt.Errorf("%w: %s: must only contain [A-Za-z0-9_.-/]", ErrInstrumentName, name)
}
}
return nil
}
func isAlpha(c rune) bool {
return ('a' <= c && c <= 'z') || ('A' <= c && c <= 'Z')
}
func isAlphanumeric(c rune) bool {
return isAlpha(c) || ('0' <= c && c <= '9')
}
func warnRepeatedObservableCallbacks(id Instrument) {
inst := fmt.Sprintf(
"Instrument{Name: %q, Description: %q, Kind: %q, Unit: %q}",
id.Name, id.Description, "InstrumentKind"+id.Kind.String(), id.Unit,
)
global.Warn("Repeated observable instrument creation with callbacks. Ignoring new callbacks. Use meter.RegisterCallback and Registration.Unregister to manage callbacks.",
"instrument", inst,
)
}
// RegisterCallback registers f to be called each collection cycle so it will
// make observations for insts during those cycles.
//
// The only instruments f can make observations for are insts. All other
// observations will be dropped and an error will be logged.
//
// Only instruments from this meter can be registered with f; an error is
// returned if other instruments are provided.
//
// Only observations made in the callback will be exported. Unlike synchronous
// instruments, asynchronous callbacks can "forget" attribute sets that are no
// longer relevant by omitting the observation during the callback.
//
// The returned Registration can be used to unregister f.
func (m *meter) RegisterCallback(f metric.Callback, insts ...metric.Observable) (metric.Registration, error) {
if len(insts) == 0 {
// Don't allocate an observer if not needed.
return noopRegister{}, nil
}
reg := newObserver()
var errs multierror
for _, inst := range insts {
// Unwrap any global.
if u, ok := inst.(interface {
Unwrap() metric.Observable
}); ok {
inst = u.Unwrap()
}
switch o := inst.(type) {
case int64Observable:
if err := o.registerable(m); err != nil {
if !errors.Is(err, errEmptyAgg) {
errs.append(err)
}
continue
}
reg.registerInt64(o.observablID)
case float64Observable:
if err := o.registerable(m); err != nil {
if !errors.Is(err, errEmptyAgg) {
errs.append(err)
}
continue
}
reg.registerFloat64(o.observablID)
default:
// Instrument external to the SDK.
return nil, fmt.Errorf("invalid observable: from different implementation")
}
}
err := errs.errorOrNil()
if reg.len() == 0 {
// All insts use drop aggregation or are invalid.
return noopRegister{}, err
}
// Some or all instruments were valid.
cback := func(ctx context.Context) error { return f(ctx, reg) }
return m.pipes.registerMultiCallback(cback), err
}
type observer struct {
embedded.Observer
float64 map[observablID[float64]]struct{}
int64 map[observablID[int64]]struct{}
}
func newObserver() observer {
return observer{
float64: make(map[observablID[float64]]struct{}),
int64: make(map[observablID[int64]]struct{}),
}
}
func (r observer) len() int {
return len(r.float64) + len(r.int64)
}
func (r observer) registerFloat64(id observablID[float64]) {
r.float64[id] = struct{}{}
}
func (r observer) registerInt64(id observablID[int64]) {
r.int64[id] = struct{}{}
}
var (
errUnknownObserver = errors.New("unknown observable instrument")
errUnregObserver = errors.New("observable instrument not registered for callback")
)
func (r observer) ObserveFloat64(o metric.Float64Observable, v float64, opts ...metric.ObserveOption) {
var oImpl float64Observable
switch conv := o.(type) {
case float64Observable:
oImpl = conv
case interface {
Unwrap() metric.Observable
}:
// Unwrap any global.
async := conv.Unwrap()
var ok bool
if oImpl, ok = async.(float64Observable); !ok {
global.Error(errUnknownObserver, "failed to record asynchronous")
return
}
default:
global.Error(errUnknownObserver, "failed to record")
return
}
if _, registered := r.float64[oImpl.observablID]; !registered {
if !oImpl.dropAggregation {
global.Error(errUnregObserver, "failed to record",
"name", oImpl.name,
"description", oImpl.description,
"unit", oImpl.unit,
"number", fmt.Sprintf("%T", float64(0)),
)
}
return
}
c := metric.NewObserveConfig(opts)
oImpl.observe(v, c.Attributes())
}
func (r observer) ObserveInt64(o metric.Int64Observable, v int64, opts ...metric.ObserveOption) {
var oImpl int64Observable
switch conv := o.(type) {
case int64Observable:
oImpl = conv
case interface {
Unwrap() metric.Observable
}:
// Unwrap any global.
async := conv.Unwrap()
var ok bool
if oImpl, ok = async.(int64Observable); !ok {
global.Error(errUnknownObserver, "failed to record asynchronous")
return
}
default:
global.Error(errUnknownObserver, "failed to record")
return
}
if _, registered := r.int64[oImpl.observablID]; !registered {
if !oImpl.dropAggregation {
global.Error(errUnregObserver, "failed to record",
"name", oImpl.name,
"description", oImpl.description,
"unit", oImpl.unit,
"number", fmt.Sprintf("%T", int64(0)),
)
}
return
}
c := metric.NewObserveConfig(opts)
oImpl.observe(v, c.Attributes())
}
type noopRegister struct{ embedded.Registration }
func (noopRegister) Unregister() error {
return nil
}
// int64InstProvider provides int64 OpenTelemetry instruments.
type int64InstProvider struct{ *meter }
func (p int64InstProvider) aggs(kind InstrumentKind, name, desc, u string) ([]aggregate.Measure[int64], error) {
inst := Instrument{
Name: name,
Description: desc,
Unit: u,
Kind: kind,
Scope: p.scope,
}
return p.int64Resolver.Aggregators(inst)
}
func (p int64InstProvider) histogramAggs(name string, cfg metric.Int64HistogramConfig) ([]aggregate.Measure[int64], error) {
boundaries := cfg.ExplicitBucketBoundaries()
aggError := AggregationExplicitBucketHistogram{Boundaries: boundaries}.err()
if aggError != nil {
// If boundaries are invalid, ignore them.
boundaries = nil
}
inst := Instrument{
Name: name,
Description: cfg.Description(),
Unit: cfg.Unit(),
Kind: InstrumentKindHistogram,
Scope: p.scope,
}
measures, err := p.int64Resolver.HistogramAggregators(inst, boundaries)
return measures, errors.Join(aggError, err)
}
// lookup returns the resolved instrumentImpl.
func (p int64InstProvider) lookup(kind InstrumentKind, name, desc, u string) (*int64Inst, error) {
return p.meter.int64Insts.Lookup(instID{
Name: name,
Description: desc,
Unit: u,
Kind: kind,
}, func() (*int64Inst, error) {
aggs, err := p.aggs(kind, name, desc, u)
return &int64Inst{measures: aggs}, err
})
}
// lookupHistogram returns the resolved instrumentImpl.
func (p int64InstProvider) lookupHistogram(name string, cfg metric.Int64HistogramConfig) (*int64Inst, error) {
return p.meter.int64Insts.Lookup(instID{
Name: name,
Description: cfg.Description(),
Unit: cfg.Unit(),
Kind: InstrumentKindHistogram,
}, func() (*int64Inst, error) {
aggs, err := p.histogramAggs(name, cfg)
return &int64Inst{measures: aggs}, err
})
}
// float64InstProvider provides float64 OpenTelemetry instruments.
type float64InstProvider struct{ *meter }
func (p float64InstProvider) aggs(kind InstrumentKind, name, desc, u string) ([]aggregate.Measure[float64], error) {
inst := Instrument{
Name: name,
Description: desc,
Unit: u,
Kind: kind,
Scope: p.scope,
}
return p.float64Resolver.Aggregators(inst)
}
func (p float64InstProvider) histogramAggs(name string, cfg metric.Float64HistogramConfig) ([]aggregate.Measure[float64], error) {
boundaries := cfg.ExplicitBucketBoundaries()
aggError := AggregationExplicitBucketHistogram{Boundaries: boundaries}.err()
if aggError != nil {
// If boundaries are invalid, ignore them.
boundaries = nil
}
inst := Instrument{
Name: name,
Description: cfg.Description(),
Unit: cfg.Unit(),
Kind: InstrumentKindHistogram,
Scope: p.scope,
}
measures, err := p.float64Resolver.HistogramAggregators(inst, boundaries)
return measures, errors.Join(aggError, err)
}
// lookup returns the resolved instrumentImpl.
func (p float64InstProvider) lookup(kind InstrumentKind, name, desc, u string) (*float64Inst, error) {
return p.meter.float64Insts.Lookup(instID{
Name: name,
Description: desc,
Unit: u,
Kind: kind,
}, func() (*float64Inst, error) {
aggs, err := p.aggs(kind, name, desc, u)
return &float64Inst{measures: aggs}, err
})
}
// lookupHistogram returns the resolved instrumentImpl.
func (p float64InstProvider) lookupHistogram(name string, cfg metric.Float64HistogramConfig) (*float64Inst, error) {
return p.meter.float64Insts.Lookup(instID{
Name: name,
Description: cfg.Description(),
Unit: cfg.Unit(),
Kind: InstrumentKindHistogram,
}, func() (*float64Inst, error) {
aggs, err := p.histogramAggs(name, cfg)
return &float64Inst{measures: aggs}, err
})
}
type int64Observer struct {
embedded.Int64Observer
measures[int64]
}
func (o int64Observer) Observe(val int64, opts ...metric.ObserveOption) {
c := metric.NewObserveConfig(opts)
o.observe(val, c.Attributes())
}
type float64Observer struct {
embedded.Float64Observer
measures[float64]
}
func (o float64Observer) Observe(val float64, opts ...metric.ObserveOption) {
c := metric.NewObserveConfig(opts)
o.observe(val, c.Attributes())
}
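A runnable sketch of the RegisterCallback contract documented above: one callback observing a single registered observable gauge, using the public metric and sdk/metric APIs.

```go
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/metric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	provider := sdkmetric.NewMeterProvider()
	meter := provider.Meter("example")

	gauge, err := meter.Int64ObservableGauge("queue.depth")
	if err != nil {
		panic(err)
	}

	// Observations for instruments not listed after the callback are
	// dropped, so the gauge must be passed as a trailing argument.
	reg, err := meter.RegisterCallback(func(ctx context.Context, o metric.Observer) error {
		o.ObserveInt64(gauge, 42)
		return nil
	}, gauge)
	if err != nil {
		panic(err)
	}
	defer func() { _ = reg.Unregister() }()

	fmt.Println("callback registered")
}
```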


@ -0,0 +1,3 @@
# SDK Metric data
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/sdk/metric/metricdata)](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/metric/metricdata)


@ -0,0 +1,296 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metricdata // import "go.opentelemetry.io/otel/sdk/metric/metricdata"
import (
"encoding/json"
"time"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk/instrumentation"
"go.opentelemetry.io/otel/sdk/resource"
)
// ResourceMetrics is a collection of ScopeMetrics and the associated Resource
// that created them.
type ResourceMetrics struct {
// Resource represents the entity that collected the metrics.
Resource *resource.Resource
// ScopeMetrics are the collection of metrics with unique Scopes.
ScopeMetrics []ScopeMetrics
}
// ScopeMetrics is a collection of Metrics produced by a Meter.
type ScopeMetrics struct {
// Scope is the Scope that the Meter was created with.
Scope instrumentation.Scope
// Metrics are a list of aggregations created by the Meter.
Metrics []Metrics
}
// Metrics is a collection of one or more aggregated timeseries from an Instrument.
type Metrics struct {
// Name is the name of the Instrument that created this data.
Name string
// Description is the description of the Instrument, which can be used in documentation.
Description string
// Unit is the unit in which the Instrument reports.
Unit string
// Data is the aggregated data from an Instrument.
Data Aggregation
}
// Aggregation is the store of data reported by an Instrument.
// It will be one of: Gauge, Sum, Histogram, ExponentialHistogram, or Summary.
type Aggregation interface {
privateAggregation()
}
// Gauge represents a measurement of the current value of an instrument.
type Gauge[N int64 | float64] struct {
// DataPoints are the individual aggregated measurements with unique
// Attributes.
DataPoints []DataPoint[N]
}
func (Gauge[N]) privateAggregation() {}
// Sum represents the sum of all measurements of values from an instrument.
type Sum[N int64 | float64] struct {
// DataPoints are the individual aggregated measurements with unique
// Attributes.
DataPoints []DataPoint[N]
// Temporality describes if the aggregation is reported as the change from the
// last report time, or the cumulative changes since a fixed start time.
Temporality Temporality
// IsMonotonic represents if this aggregation only increases or decreases.
IsMonotonic bool
}
func (Sum[N]) privateAggregation() {}
// DataPoint is a single data point in a timeseries.
type DataPoint[N int64 | float64] struct {
// Attributes is the set of key value pairs that uniquely identify the
// timeseries.
Attributes attribute.Set
// StartTime is when the timeseries was started. (optional)
StartTime time.Time `json:",omitempty"`
// Time is the time when the timeseries was recorded. (optional)
Time time.Time `json:",omitempty"`
// Value is the value of this data point.
Value N
// Exemplars is the sampled Exemplars collected during the timeseries.
Exemplars []Exemplar[N] `json:",omitempty"`
}
// Histogram represents the histogram of all measurements of values from an instrument.
type Histogram[N int64 | float64] struct {
// DataPoints are the individual aggregated measurements with unique
// Attributes.
DataPoints []HistogramDataPoint[N]
// Temporality describes if the aggregation is reported as the change from the
// last report time, or the cumulative changes since a fixed start time.
Temporality Temporality
}
func (Histogram[N]) privateAggregation() {}
// HistogramDataPoint is a single histogram data point in a timeseries.
type HistogramDataPoint[N int64 | float64] struct {
// Attributes is the set of key value pairs that uniquely identify the
// timeseries.
Attributes attribute.Set
// StartTime is when the timeseries was started.
StartTime time.Time
// Time is the time when the timeseries was recorded.
Time time.Time
// Count is the number of updates this histogram has been calculated with.
Count uint64
// Bounds are the upper bounds of the buckets of the histogram. Because the
// last boundary is +infinity, it is implied rather than stored.
Bounds []float64
// BucketCounts is the count of each of the buckets.
BucketCounts []uint64
// Min is the minimum value recorded. (optional)
Min Extrema[N]
// Max is the maximum value recorded. (optional)
Max Extrema[N]
// Sum is the sum of the values recorded.
Sum N
// Exemplars is the sampled Exemplars collected during the timeseries.
Exemplars []Exemplar[N] `json:",omitempty"`
}
// ExponentialHistogram represents the histogram of all measurements of values from an instrument.
type ExponentialHistogram[N int64 | float64] struct {
// DataPoints are the individual aggregated measurements with unique
// attributes.
DataPoints []ExponentialHistogramDataPoint[N]
// Temporality describes if the aggregation is reported as the change from the
// last report time, or the cumulative changes since a fixed start time.
Temporality Temporality
}
func (ExponentialHistogram[N]) privateAggregation() {}
// ExponentialHistogramDataPoint is a single exponential histogram data point in a timeseries.
type ExponentialHistogramDataPoint[N int64 | float64] struct {
// Attributes is the set of key value pairs that uniquely identify the
// timeseries.
Attributes attribute.Set
// StartTime is when the timeseries was started.
StartTime time.Time
// Time is the time when the timeseries was recorded.
Time time.Time
// Count is the number of updates this histogram has been calculated with.
Count uint64
// Min is the minimum value recorded. (optional)
Min Extrema[N]
// Max is the maximum value recorded. (optional)
Max Extrema[N]
// Sum is the sum of the values recorded.
Sum N
// Scale describes the resolution of the histogram. Boundaries are
// located at powers of the base, where:
//
// base = 2 ^ (2 ^ -Scale)
Scale int32
// ZeroCount is the number of values whose absolute value
// is less than or equal to [ZeroThreshold].
// When ZeroThreshold is 0, this is the number of values that
// cannot be expressed using the standard exponential formula
// as well as values that have been rounded to zero.
// ZeroCount represents the special zero count bucket.
ZeroCount uint64
// PositiveBucket is the range of positive value bucket counts.
PositiveBucket ExponentialBucket
// NegativeBucket is the range of negative value bucket counts.
NegativeBucket ExponentialBucket
// ZeroThreshold is the width of the zero region. Where the zero region is
// defined as the closed interval [-ZeroThreshold, ZeroThreshold].
ZeroThreshold float64
// Exemplars is the sampled Exemplars collected during the timeseries.
Exemplars []Exemplar[N] `json:",omitempty"`
}
// ExponentialBucket is a set of bucket counts, encoded in a contiguous array
// of counts.
type ExponentialBucket struct {
// Offset is the bucket index of the first entry in the Counts slice.
Offset int32
// Counts is a slice where Counts[i] carries the count of the bucket at
// index (Offset+i). Counts[i] is the count of values greater than
// base^(Offset+i) and less than or equal to base^(Offset+i+1).
Counts []uint64
}
// Extrema is the minimum or maximum value of a dataset.
type Extrema[N int64 | float64] struct {
value N
valid bool
}
// MarshalText converts the Extrema value to text.
func (e Extrema[N]) MarshalText() ([]byte, error) {
if !e.valid {
return json.Marshal(nil)
}
return json.Marshal(e.value)
}
// MarshalJSON converts the Extrema value to a JSON number.
func (e *Extrema[N]) MarshalJSON() ([]byte, error) {
return e.MarshalText()
}
// NewExtrema returns an Extrema set to v.
func NewExtrema[N int64 | float64](v N) Extrema[N] {
return Extrema[N]{value: v, valid: true}
}
// Value returns the Extrema value and true if the Extrema is defined.
// Otherwise, if the Extrema is its zero-value, defined will be false.
func (e Extrema[N]) Value() (v N, defined bool) {
return e.value, e.valid
}
// Exemplar is a measurement sampled from a timeseries providing a typical
// example.
type Exemplar[N int64 | float64] struct {
// FilteredAttributes are the attributes recorded with the measurement but
// filtered out of the timeseries' aggregated data.
FilteredAttributes []attribute.KeyValue
// Time is the time when the measurement was recorded.
Time time.Time
// Value is the measured value.
Value N
// SpanID is the ID of the span that was active during the measurement. If
// no span was active or the span was not sampled this will be empty.
SpanID []byte `json:",omitempty"`
// TraceID is the ID of the trace the active span belonged to during the
// measurement. If no span was active or the span was not sampled this will
// be empty.
TraceID []byte `json:",omitempty"`
}
// Summary metric data are used to convey quantile summaries,
// a Prometheus (see: https://prometheus.io/docs/concepts/metric_types/#summary)
// data type.
//
// These data points cannot always be merged in a meaningful way. The Summary
// type is only used by bridges from other metrics libraries, and cannot be
// produced using OpenTelemetry instrumentation.
type Summary struct {
// DataPoints are the individual aggregated measurements with unique
// attributes.
DataPoints []SummaryDataPoint
}
func (Summary) privateAggregation() {}
// SummaryDataPoint is a single data point in a timeseries that describes the
// time-varying values of a Summary metric.
type SummaryDataPoint struct {
// Attributes is the set of key value pairs that uniquely identify the
// timeseries.
Attributes attribute.Set
// StartTime is when the timeseries was started.
StartTime time.Time
// Time is the time when the timeseries was recorded.
Time time.Time
// Count is the number of updates this summary has been calculated with.
Count uint64
// Sum is the sum of the values recorded.
Sum float64
// (Optional) list of values at different quantiles of the distribution calculated
// from the current snapshot. The quantiles must be strictly increasing.
QuantileValues []QuantileValue
}
// QuantileValue is the value at a given quantile of a summary.
type QuantileValue struct {
// Quantile is the quantile of this value.
//
// Must be in the interval [0.0, 1.0].
Quantile float64
// Value is the value at the given quantile of a summary.
//
// Quantile values must NOT be negative.
Value float64
}
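A runnable sketch constructing a minimal Gauge aggregation by hand, as a bridge or test might, using the types defined above.

```go
package main

import (
	"fmt"
	"time"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

func main() {
	dp := metricdata.DataPoint[int64]{
		Attributes: attribute.NewSet(attribute.String("host", "a")),
		Time:       time.Now(),
		Value:      7,
	}
	g := metricdata.Gauge[int64]{DataPoints: []metricdata.DataPoint[int64]{dp}}
	m := metricdata.Metrics{Name: "queue.depth", Unit: "{item}", Data: g}
	fmt.Printf("%s: %T with %d point(s)\n", m.Name, m.Data, len(g.DataPoints))
}
```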


@ -0,0 +1,30 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:generate stringer -type=Temporality
package metricdata // import "go.opentelemetry.io/otel/sdk/metric/metricdata"
// Temporality defines the window that an aggregation was calculated over.
type Temporality uint8
const (
// undefinedTemporality represents an unset Temporality.
//nolint:deadcode,unused,varcheck
undefinedTemporality Temporality = iota
// CumulativeTemporality defines a measurement interval that continues to
// expand forward in time from a starting point. New measurements are
// added to all previous measurements since a start time.
CumulativeTemporality
// DeltaTemporality defines a measurement interval that resets each cycle.
// Measurements from one cycle are recorded independently, measurements
// from other cycles do not affect them.
DeltaTemporality
)
// MarshalText returns the byte-encoded form of t.
func (t Temporality) MarshalText() ([]byte, error) {
return []byte(t.String()), nil
}


@ -0,0 +1,25 @@
// Code generated by "stringer -type=Temporality"; DO NOT EDIT.
package metricdata
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[undefinedTemporality-0]
_ = x[CumulativeTemporality-1]
_ = x[DeltaTemporality-2]
}
const _Temporality_name = "undefinedTemporalityCumulativeTemporalityDeltaTemporality"
var _Temporality_index = [...]uint8{0, 20, 41, 57}
func (i Temporality) String() string {
if i >= Temporality(len(_Temporality_index)-1) {
return "Temporality(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _Temporality_name[_Temporality_index[i]:_Temporality_index[i+1]]
}
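A runnable sketch of the generated String method and the MarshalText wrapper above.

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

func main() {
	// fmt uses the generated String method.
	fmt.Println(metricdata.DeltaTemporality) // DeltaTemporality

	b, err := metricdata.CumulativeTemporality.MarshalText()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // CumulativeTemporality
}
```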


@ -0,0 +1,370 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"context"
"errors"
"fmt"
"sync"
"sync/atomic"
"time"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
)
// Default periodic reader timing.
const (
defaultTimeout = time.Millisecond * 30000
defaultInterval = time.Millisecond * 60000
)
// periodicReaderConfig contains configuration options for a PeriodicReader.
type periodicReaderConfig struct {
interval time.Duration
timeout time.Duration
producers []Producer
}
// newPeriodicReaderConfig returns a periodicReaderConfig configured with
// options.
func newPeriodicReaderConfig(options []PeriodicReaderOption) periodicReaderConfig {
c := periodicReaderConfig{
interval: envDuration(envInterval, defaultInterval),
timeout: envDuration(envTimeout, defaultTimeout),
}
for _, o := range options {
c = o.applyPeriodic(c)
}
return c
}
// PeriodicReaderOption applies a configuration option value to a PeriodicReader.
type PeriodicReaderOption interface {
applyPeriodic(periodicReaderConfig) periodicReaderConfig
}
// periodicReaderOptionFunc applies a set of options to a periodicReaderConfig.
type periodicReaderOptionFunc func(periodicReaderConfig) periodicReaderConfig
// applyPeriodic returns a periodicReaderConfig with option(s) applied.
func (o periodicReaderOptionFunc) applyPeriodic(conf periodicReaderConfig) periodicReaderConfig {
return o(conf)
}
// WithTimeout configures the time a PeriodicReader waits for an export to
// complete before canceling it. This includes an export which occurs as part
// of Shutdown or ForceFlush if the user-passed context does not have a
// deadline. If the user-passed context does have a deadline, it will be used
// instead.
//
// This option overrides any value set for the
// OTEL_METRIC_EXPORT_TIMEOUT environment variable.
//
// If this option is not used or d is less than or equal to zero, 30 seconds
// is used as the default.
func WithTimeout(d time.Duration) PeriodicReaderOption {
return periodicReaderOptionFunc(func(conf periodicReaderConfig) periodicReaderConfig {
if d <= 0 {
return conf
}
conf.timeout = d
return conf
})
}
// WithInterval configures the intervening time between exports for a
// PeriodicReader.
//
// This option overrides any value set for the
// OTEL_METRIC_EXPORT_INTERVAL environment variable.
//
// If this option is not used or d is less than or equal to zero, 60 seconds
// is used as the default.
func WithInterval(d time.Duration) PeriodicReaderOption {
return periodicReaderOptionFunc(func(conf periodicReaderConfig) periodicReaderConfig {
if d <= 0 {
return conf
}
conf.interval = d
return conf
})
}
// NewPeriodicReader returns a Reader that collects and exports metric data to
// the exporter at a defined interval. By default, the returned Reader will
// collect and export data every 60 seconds, and will cancel any attempt whose
// combined collect and export exceeds 30 seconds. The collect and export time
// is not counted towards the interval between attempts.
//
// The Collect method of the returned Reader continues to gather and return
// metric data to the user. It will not automatically send that data to the
// exporter. That is left to the user to accomplish.
func NewPeriodicReader(exporter Exporter, options ...PeriodicReaderOption) *PeriodicReader {
conf := newPeriodicReaderConfig(options)
ctx, cancel := context.WithCancel(context.Background())
r := &PeriodicReader{
interval: conf.interval,
timeout: conf.timeout,
exporter: exporter,
flushCh: make(chan chan error),
cancel: cancel,
done: make(chan struct{}),
rmPool: sync.Pool{
New: func() interface{} {
return &metricdata.ResourceMetrics{}
},
},
}
r.externalProducers.Store(conf.producers)
go func() {
defer func() { close(r.done) }()
r.run(ctx, conf.interval)
}()
return r
}
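// Illustrative sketch, not part of the vendored file: constructing a
// PeriodicReader from an external package. It assumes a value exp
// (hypothetical) satisfying the Exporter interface, and these imports:
//
//	import (
//		"time"
//
//		sdkmetric "go.opentelemetry.io/otel/sdk/metric"
//	)
//
//	reader := sdkmetric.NewPeriodicReader(exp,
//		// Export every 15s instead of the 60s default.
//		sdkmetric.WithInterval(15*time.Second),
//		// Cancel any combined collect and export that exceeds 5s.
//		sdkmetric.WithTimeout(5*time.Second),
//	)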
// PeriodicReader is a Reader that continuously collects and exports metric
// data at a set interval.
type PeriodicReader struct {
sdkProducer atomic.Value
mu sync.Mutex
isShutdown bool
externalProducers atomic.Value
interval time.Duration
timeout time.Duration
exporter Exporter
flushCh chan chan error
done chan struct{}
cancel context.CancelFunc
shutdownOnce sync.Once
rmPool sync.Pool
}
// Compile-time check that PeriodicReader implements Reader and is comparable.
var _ = map[Reader]struct{}{&PeriodicReader{}: {}}
// newTicker allows testing override.
var newTicker = time.NewTicker
// run continuously collects and exports metric data at the specified
// interval. This will run until ctx is canceled or times out.
func (r *PeriodicReader) run(ctx context.Context, interval time.Duration) {
ticker := newTicker(interval)
defer ticker.Stop()
for {
select {
case <-ticker.C:
err := r.collectAndExport(ctx)
if err != nil {
otel.Handle(err)
}
case errCh := <-r.flushCh:
errCh <- r.collectAndExport(ctx)
ticker.Reset(interval)
case <-ctx.Done():
return
}
}
}
// register registers p as the producer of this reader.
func (r *PeriodicReader) register(p sdkProducer) {
// Only register once. If producer is already set, do nothing.
if !r.sdkProducer.CompareAndSwap(nil, produceHolder{produce: p.produce}) {
msg := "did not register periodic reader"
global.Error(errDuplicateRegister, msg)
}
}
// temporality reports the Temporality for the instrument kind provided.
func (r *PeriodicReader) temporality(kind InstrumentKind) metricdata.Temporality {
return r.exporter.Temporality(kind)
}
// aggregation returns what Aggregation to use for kind.
func (r *PeriodicReader) aggregation(kind InstrumentKind) Aggregation { // nolint:revive // import-shadow for method scoped by type.
return r.exporter.Aggregation(kind)
}
// collectAndExport gathers all metric data related to the PeriodicReader r
// from the SDK and exports it with r's exporter.
func (r *PeriodicReader) collectAndExport(ctx context.Context) error {
ctx, cancel := context.WithTimeout(ctx, r.timeout)
defer cancel()
// A sync.Pool is used so rm is not allocated on every Collect (#3047).
rm := r.rmPool.Get().(*metricdata.ResourceMetrics)
err := r.Collect(ctx, rm)
if err == nil {
err = r.export(ctx, rm)
}
r.rmPool.Put(rm)
return err
}
// Collect gathers all metric data related to the Reader from
// the SDK and other Producers and stores the result in rm. The metric
// data is not exported to the configured exporter, it is left to the caller to
// handle that if desired.
//
// Collect will return an error if called after shutdown.
// Collect will return an error if rm is a nil ResourceMetrics.
// Collect will return an error if the context's Done channel is closed.
//
// This method is safe to call concurrently.
func (r *PeriodicReader) Collect(ctx context.Context, rm *metricdata.ResourceMetrics) error {
if rm == nil {
return errors.New("periodic reader: *metricdata.ResourceMetrics is nil")
}
// TODO (#3047): When collect is updated to accept output as param, pass rm.
return r.collect(ctx, r.sdkProducer.Load(), rm)
}
// collect unwraps p as a produceHolder and returns its produce results.
func (r *PeriodicReader) collect(ctx context.Context, p interface{}, rm *metricdata.ResourceMetrics) error {
if p == nil {
return ErrReaderNotRegistered
}
ph, ok := p.(produceHolder)
if !ok {
// The atomic.Value is entirely in the PeriodicReader's control so
// this should never happen. In the unforeseen case that it does,
// return an error instead of panicking so a user's code does not
// halt in the process.
err := fmt.Errorf("periodic reader: invalid producer: %T", p)
return err
}
err := ph.produce(ctx, rm)
if err != nil {
return err
}
var errs []error
for _, producer := range r.externalProducers.Load().([]Producer) {
externalMetrics, err := producer.Produce(ctx)
if err != nil {
errs = append(errs, err)
}
rm.ScopeMetrics = append(rm.ScopeMetrics, externalMetrics...)
}
global.Debug("PeriodicReader collection", "Data", rm)
return unifyErrors(errs)
}
// export exports metric data m using r's exporter.
func (r *PeriodicReader) export(ctx context.Context, m *metricdata.ResourceMetrics) error {
return r.exporter.Export(ctx, m)
}
// ForceFlush flushes pending telemetry.
//
// This method is safe to call concurrently.
func (r *PeriodicReader) ForceFlush(ctx context.Context) error {
// Prioritize the ctx timeout if it is set.
if _, ok := ctx.Deadline(); !ok {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(ctx, r.timeout)
defer cancel()
}
errCh := make(chan error, 1)
select {
case r.flushCh <- errCh:
select {
case err := <-errCh:
if err != nil {
return err
}
close(errCh)
case <-ctx.Done():
return ctx.Err()
}
case <-r.done:
return ErrReaderShutdown
case <-ctx.Done():
return ctx.Err()
}
return r.exporter.ForceFlush(ctx)
}
// Shutdown flushes pending telemetry and then stops the export pipeline.
//
// This method is safe to call concurrently.
func (r *PeriodicReader) Shutdown(ctx context.Context) error {
err := ErrReaderShutdown
r.shutdownOnce.Do(func() {
// Prioritize the ctx timeout if it is set.
if _, ok := ctx.Deadline(); !ok {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(ctx, r.timeout)
defer cancel()
}
// Stop the run loop.
r.cancel()
<-r.done
// Any future call to Collect will now return ErrReaderShutdown.
ph := r.sdkProducer.Swap(produceHolder{
produce: shutdownProducer{}.produce,
})
if ph != nil { // Reader was registered.
// Flush pending telemetry.
m := r.rmPool.Get().(*metricdata.ResourceMetrics)
err = r.collect(ctx, ph, m)
if err == nil {
err = r.export(ctx, m)
}
r.rmPool.Put(m)
}
sErr := r.exporter.Shutdown(ctx)
if err == nil || errors.Is(err, ErrReaderShutdown) {
err = sErr
}
r.mu.Lock()
defer r.mu.Unlock()
r.isShutdown = true
// release references to Producer(s)
r.externalProducers.Store([]Producer{})
})
return err
}
// MarshalLog returns logging data about the PeriodicReader.
func (r *PeriodicReader) MarshalLog() interface{} {
r.mu.Lock()
down := r.isShutdown
r.mu.Unlock()
return struct {
Type string
Exporter Exporter
Registered bool
Shutdown bool
Interval time.Duration
Timeout time.Duration
}{
Type: "PeriodicReader",
Exporter: r.exporter,
Registered: r.sdkProducer.Load() != nil,
Shutdown: down,
Interval: r.interval,
Timeout: r.timeout,
}
}
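// Illustrative sketch, not part of the vendored file: Collect gathers data
// without exporting it, as documented above. It assumes a PeriodicReader
// named reader that was registered with a MeterProvider, and imports of
// context and go.opentelemetry.io/otel/sdk/metric/metricdata:
//
//	var rm metricdata.ResourceMetrics
//	if err := reader.Collect(context.Background(), &rm); err != nil {
//		// Handle ErrReaderNotRegistered, ErrReaderShutdown, etc.
//	}
//	// rm now holds the collected metrics; exporting is left to the caller.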

655
vendor/go.opentelemetry.io/otel/sdk/metric/pipeline.go generated vendored Normal file
View File

@ -0,0 +1,655 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"container/list"
"context"
"errors"
"fmt"
"strings"
"sync"
"sync/atomic"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/metric/embedded"
"go.opentelemetry.io/otel/sdk/instrumentation"
"go.opentelemetry.io/otel/sdk/metric/internal"
"go.opentelemetry.io/otel/sdk/metric/internal/aggregate"
"go.opentelemetry.io/otel/sdk/metric/internal/x"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
"go.opentelemetry.io/otel/sdk/resource"
)
var (
errCreatingAggregators = errors.New("could not create all aggregators")
errIncompatibleAggregation = errors.New("incompatible aggregation")
errUnknownAggregation = errors.New("unrecognized aggregation")
)
// instrumentSync is a synchronization point between a pipeline and an
// instrument's aggregate function.
type instrumentSync struct {
name string
description string
unit string
compAgg aggregate.ComputeAggregation
}
func newPipeline(res *resource.Resource, reader Reader, views []View) *pipeline {
if res == nil {
res = resource.Empty()
}
return &pipeline{
resource: res,
reader: reader,
views: views,
// aggregations is lazy allocated when needed.
}
}
// pipeline connects all of the instruments created by a meter provider to a Reader.
// This is the object that will be registered via `Reader.register()` when a meter
// provider is created.
//
// As instruments are created, each instrument should be checked against the views
// of the Reader, and if it matches, the corresponding aggregate function should be
// added to the pipeline.
type pipeline struct {
resource *resource.Resource
reader Reader
views []View
sync.Mutex
aggregations map[instrumentation.Scope][]instrumentSync
callbacks []func(context.Context) error
multiCallbacks list.List
}
// addSync adds the instrumentSync to pipeline p with scope. This method is not
// idempotent. Duplicate calls will result in duplicate additions; it is the
// caller's responsibility to ensure this is called with unique values.
func (p *pipeline) addSync(scope instrumentation.Scope, iSync instrumentSync) {
p.Lock()
defer p.Unlock()
if p.aggregations == nil {
p.aggregations = map[instrumentation.Scope][]instrumentSync{
scope: {iSync},
}
return
}
p.aggregations[scope] = append(p.aggregations[scope], iSync)
}
type multiCallback func(context.Context) error
// addMultiCallback registers a multi-instrument callback to be run when
// `produce()` is called.
func (p *pipeline) addMultiCallback(c multiCallback) (unregister func()) {
p.Lock()
defer p.Unlock()
e := p.multiCallbacks.PushBack(c)
return func() {
p.Lock()
p.multiCallbacks.Remove(e)
p.Unlock()
}
}
// produce returns aggregated metrics from a single collection.
//
// This method is safe to call concurrently.
func (p *pipeline) produce(ctx context.Context, rm *metricdata.ResourceMetrics) error {
p.Lock()
defer p.Unlock()
var errs multierror
for _, c := range p.callbacks {
// TODO make the callbacks parallel. ( #3034 )
if err := c(ctx); err != nil {
errs.append(err)
}
if err := ctx.Err(); err != nil {
rm.Resource = nil
rm.ScopeMetrics = rm.ScopeMetrics[:0]
return err
}
}
for e := p.multiCallbacks.Front(); e != nil; e = e.Next() {
// TODO make the callbacks parallel. ( #3034 )
f := e.Value.(multiCallback)
if err := f(ctx); err != nil {
errs.append(err)
}
if err := ctx.Err(); err != nil {
// This means the context expired before we finished running callbacks.
rm.Resource = nil
rm.ScopeMetrics = rm.ScopeMetrics[:0]
return err
}
}
rm.Resource = p.resource
rm.ScopeMetrics = internal.ReuseSlice(rm.ScopeMetrics, len(p.aggregations))
i := 0
for scope, instruments := range p.aggregations {
rm.ScopeMetrics[i].Metrics = internal.ReuseSlice(rm.ScopeMetrics[i].Metrics, len(instruments))
j := 0
for _, inst := range instruments {
data := rm.ScopeMetrics[i].Metrics[j].Data
if n := inst.compAgg(&data); n > 0 {
rm.ScopeMetrics[i].Metrics[j].Name = inst.name
rm.ScopeMetrics[i].Metrics[j].Description = inst.description
rm.ScopeMetrics[i].Metrics[j].Unit = inst.unit
rm.ScopeMetrics[i].Metrics[j].Data = data
j++
}
}
rm.ScopeMetrics[i].Metrics = rm.ScopeMetrics[i].Metrics[:j]
if len(rm.ScopeMetrics[i].Metrics) > 0 {
rm.ScopeMetrics[i].Scope = scope
i++
}
}
rm.ScopeMetrics = rm.ScopeMetrics[:i]
return errs.errorOrNil()
}
// inserter facilitates inserting of new instruments from a single scope into a
// pipeline.
type inserter[N int64 | float64] struct {
// aggregators is a cache that holds aggregate function inputs whose
// outputs have been inserted into the underlying reader pipeline. This
// cache ensures no duplicate aggregate functions are inserted into the
// reader pipeline, and that a later instrument creation requesting the same
// aggregate function input receives the same instance.
aggregators *cache[instID, aggVal[N]]
// views is a cache that holds instrument identifiers for all the
// instruments a Meter has created; it is provided by the Meter that owns
// this inserter. This cache ensures a warning message is logged when
// instruments are created with the same name but different options (e.g.
// description, unit).
views *cache[string, instID]
pipeline *pipeline
}
func newInserter[N int64 | float64](p *pipeline, vc *cache[string, instID]) *inserter[N] {
if vc == nil {
vc = &cache[string, instID]{}
}
return &inserter[N]{
aggregators: &cache[instID, aggVal[N]]{},
views: vc,
pipeline: p,
}
}
// Instrument inserts the instrument inst into a pipeline. All
// views the pipeline contains are matched against, and any matching view that
// creates a unique aggregate function will have its output inserted into the
// pipeline and its input included in the returned slice.
//
// The returned aggregate function inputs are ensured to be deduplicated and
// unique. If another view in another pipeline that is cached by this
// inserter's cache has already inserted the same aggregate function for the
// same instrument, that function's input instance is returned.
//
// If another instrument has already been inserted by this inserter, or any
// other using the same cache, and it conflicts with the instrument being
// inserted in this call, an aggregate function input matching the arguments
// will still be returned but an Info level log message will also be logged to
// the OTel global logger.
//
// If the passed instrument would result in an incompatible aggregate function,
// an error is returned and that aggregate function output is not inserted nor
// is its input returned.
//
// If an instrument is determined to use a Drop aggregation, that instrument is
// not inserted nor returned.
func (i *inserter[N]) Instrument(inst Instrument, readerAggregation Aggregation) ([]aggregate.Measure[N], error) {
var (
matched bool
measures []aggregate.Measure[N]
)
errs := &multierror{wrapped: errCreatingAggregators}
seen := make(map[uint64]struct{})
for _, v := range i.pipeline.views {
stream, match := v(inst)
if !match {
continue
}
matched = true
in, id, err := i.cachedAggregator(inst.Scope, inst.Kind, stream, readerAggregation)
if err != nil {
errs.append(err)
}
if in == nil { // Drop aggregation.
continue
}
if _, ok := seen[id]; ok {
// This aggregate function has already been added.
continue
}
seen[id] = struct{}{}
measures = append(measures, in)
}
if matched {
return measures, errs.errorOrNil()
}
// Apply the implicit default view if no explicit view matched.
stream := Stream{
Name: inst.Name,
Description: inst.Description,
Unit: inst.Unit,
}
in, _, err := i.cachedAggregator(inst.Scope, inst.Kind, stream, readerAggregation)
if err != nil {
errs.append(err)
}
if in != nil {
// Guaranteed not to have been seen, given matched was false.
measures = append(measures, in)
}
return measures, errs.errorOrNil()
}
// addCallback registers a single instrument callback to be run when
// `produce()` is called.
func (i *inserter[N]) addCallback(cback func(context.Context) error) {
i.pipeline.Lock()
defer i.pipeline.Unlock()
i.pipeline.callbacks = append(i.pipeline.callbacks, cback)
}
var aggIDCount uint64
// aggVal is the cached value in an aggregators cache.
type aggVal[N int64 | float64] struct {
ID uint64
Measure aggregate.Measure[N]
Err error
}
// readerDefaultAggregation returns the default aggregation for the instrument
// kind based on the reader's aggregation preferences. This is used unless the
// aggregation is overridden with a view.
func (i *inserter[N]) readerDefaultAggregation(kind InstrumentKind) Aggregation {
aggregation := i.pipeline.reader.aggregation(kind)
switch aggregation.(type) {
case nil, AggregationDefault:
// If the reader returns default or nil use the default selector.
aggregation = DefaultAggregationSelector(kind)
default:
// Deep copy and validate before using.
aggregation = aggregation.copy()
if err := aggregation.err(); err != nil {
orig := aggregation
aggregation = DefaultAggregationSelector(kind)
global.Error(
err, "using default aggregation instead",
"aggregation", orig,
"replacement", aggregation,
)
}
}
return aggregation
}
// cachedAggregator returns the appropriate aggregate input and output
// functions for an instrument configuration. If the exact instrument has been
// created within the inst.Scope, those aggregate function instances will be
// returned. Otherwise, new computed aggregate functions will be cached and
// returned.
//
// If the instrument configuration conflicts with an instrument that has
// already been created (e.g. description, unit, data type) a warning will be
// logged at the "Info" level with the global OTel logger. Valid new aggregate
// functions for the instrument configuration will still be returned without an
// error.
//
// If the instrument defines an unknown or incompatible aggregation, an error
// is returned.
func (i *inserter[N]) cachedAggregator(scope instrumentation.Scope, kind InstrumentKind, stream Stream, readerAggregation Aggregation) (meas aggregate.Measure[N], aggID uint64, err error) {
switch stream.Aggregation.(type) {
case nil:
// The aggregation was not overridden with a view. Use the aggregation
// provided by the reader.
stream.Aggregation = readerAggregation
case AggregationDefault:
// The view explicitly requested the default aggregation.
stream.Aggregation = DefaultAggregationSelector(kind)
}
if err := isAggregatorCompatible(kind, stream.Aggregation); err != nil {
return nil, 0, fmt.Errorf(
"creating aggregator with instrumentKind: %d, aggregation %v: %w",
kind, stream.Aggregation, err,
)
}
id := i.instID(kind, stream)
// If there is a conflict, the specification says the view should
// still be applied and a warning should be logged.
i.logConflict(id)
// If there are requests for the same instrument with different name
// casing, the first-seen instance needs to be returned. Use a normalized ID
// for the cache lookup to ensure the correct comparison.
normID := id.normalize()
cv := i.aggregators.Lookup(normID, func() aggVal[N] {
b := aggregate.Builder[N]{
Temporality: i.pipeline.reader.temporality(kind),
ReservoirFunc: reservoirFunc[N](stream.Aggregation),
}
b.Filter = stream.AttributeFilter
// A value less than or equal to zero will disable the aggregation
// limits for the builder (and all the created aggregates).
// CardinalityLimit.Lookup returns 0 by default if unset (or
// unrecognized input). Use that value directly.
b.AggregationLimit, _ = x.CardinalityLimit.Lookup()
in, out, err := i.aggregateFunc(b, stream.Aggregation, kind)
if err != nil {
return aggVal[N]{0, nil, err}
}
if in == nil { // Drop aggregator.
return aggVal[N]{0, nil, nil}
}
i.pipeline.addSync(scope, instrumentSync{
// Use the first-seen name casing for this and all subsequent
// requests of this instrument.
name: stream.Name,
description: stream.Description,
unit: stream.Unit,
compAgg: out,
})
id := atomic.AddUint64(&aggIDCount, 1)
return aggVal[N]{id, in, err}
})
return cv.Measure, cv.ID, cv.Err
}
// logConflict validates if an instrument with the same case-insensitive name
// as id has already been created. If that instrument conflicts with id, a
// warning is logged.
func (i *inserter[N]) logConflict(id instID) {
// The API specification defines names as case-insensitive. If a name
// differs only in casing from an existing one, it needs to be a conflict.
name := id.normalize().Name
existing := i.views.Lookup(name, func() instID { return id })
if id == existing {
return
}
const msg = "duplicate metric stream definitions"
args := []interface{}{
"names", fmt.Sprintf("%q, %q", existing.Name, id.Name),
"descriptions", fmt.Sprintf("%q, %q", existing.Description, id.Description),
"kinds", fmt.Sprintf("%s, %s", existing.Kind, id.Kind),
"units", fmt.Sprintf("%s, %s", existing.Unit, id.Unit),
"numbers", fmt.Sprintf("%s, %s", existing.Number, id.Number),
}
// The specification recommends logging a suggested view to resolve
// conflicts if possible.
//
// https://github.com/open-telemetry/opentelemetry-specification/blob/v1.21.0/specification/metrics/sdk.md#duplicate-instrument-registration
if id.Unit != existing.Unit || id.Number != existing.Number {
// There is no view resolution for these, don't make a suggestion.
global.Warn(msg, args...)
return
}
var stream string
if id.Name != existing.Name || id.Kind != existing.Kind {
stream = `Stream{Name: "{{NEW_NAME}}"}`
} else if id.Description != existing.Description {
stream = fmt.Sprintf("Stream{Description: %q}", existing.Description)
}
inst := fmt.Sprintf(
"Instrument{Name: %q, Description: %q, Kind: %q, Unit: %q}",
id.Name, id.Description, "InstrumentKind"+id.Kind.String(), id.Unit,
)
args = append(args, "suggested.view", fmt.Sprintf("NewView(%s, %s)", inst, stream))
global.Warn(msg, args...)
}
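// Illustrative sketch, not part of the vendored file: how this warning is
// reached through the public API. Creating two instruments that differ only
// in description yields duplicate metric stream definitions. It assumes a
// meter from this SDK's MeterProvider and api aliased to
// go.opentelemetry.io/otel/metric:
//
//	a, _ := meter.Int64Counter("requests", api.WithDescription("total"))
//	b, _ := meter.Int64Counter("requests", api.WithDescription("count"))
//	// Both instruments are usable, but logConflict above logs a warning
//	// with a suggested View that would resolve the conflict.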
func (i *inserter[N]) instID(kind InstrumentKind, stream Stream) instID {
var zero N
return instID{
Name: stream.Name,
Description: stream.Description,
Unit: stream.Unit,
Kind: kind,
Number: fmt.Sprintf("%T", zero),
}
}
// aggregateFunc returns new aggregate functions matching agg and kind. If
// the agg is unknown or the temporality is invalid, an error is returned.
func (i *inserter[N]) aggregateFunc(b aggregate.Builder[N], agg Aggregation, kind InstrumentKind) (meas aggregate.Measure[N], comp aggregate.ComputeAggregation, err error) {
switch a := agg.(type) {
case AggregationDefault:
return i.aggregateFunc(b, DefaultAggregationSelector(kind), kind)
case AggregationDrop:
// Return nil in and out to signify the drop aggregator.
case AggregationLastValue:
switch kind {
case InstrumentKindGauge:
meas, comp = b.LastValue()
case InstrumentKindObservableGauge:
meas, comp = b.PrecomputedLastValue()
}
case AggregationSum:
switch kind {
case InstrumentKindObservableCounter:
meas, comp = b.PrecomputedSum(true)
case InstrumentKindObservableUpDownCounter:
meas, comp = b.PrecomputedSum(false)
case InstrumentKindCounter, InstrumentKindHistogram:
meas, comp = b.Sum(true)
default:
// InstrumentKindUpDownCounter, InstrumentKindObservableGauge, and
// instrumentKindUndefined or other invalid instrument kinds.
meas, comp = b.Sum(false)
}
case AggregationExplicitBucketHistogram:
var noSum bool
switch kind {
case InstrumentKindUpDownCounter, InstrumentKindObservableUpDownCounter, InstrumentKindObservableGauge, InstrumentKindGauge:
// The sum should not be collected for any instrument that can make
// negative measurements:
// https://github.com/open-telemetry/opentelemetry-specification/blob/v1.21.0/specification/metrics/sdk.md#histogram-aggregations
noSum = true
}
meas, comp = b.ExplicitBucketHistogram(a.Boundaries, a.NoMinMax, noSum)
case AggregationBase2ExponentialHistogram:
var noSum bool
switch kind {
case InstrumentKindUpDownCounter, InstrumentKindObservableUpDownCounter, InstrumentKindObservableGauge, InstrumentKindGauge:
// The sum should not be collected for any instrument that can make
// negative measurements:
// https://github.com/open-telemetry/opentelemetry-specification/blob/v1.21.0/specification/metrics/sdk.md#histogram-aggregations
noSum = true
}
meas, comp = b.ExponentialBucketHistogram(a.MaxSize, a.MaxScale, a.NoMinMax, noSum)
default:
err = errUnknownAggregation
}
return meas, comp, err
}
// isAggregatorCompatible checks if the aggregation can be used by the instrument.
// Current compatibility:
//
// | Instrument Kind | Drop | LastValue | Sum | Histogram | Exponential Histogram |
// |--------------------------|------|-----------|-----|-----------|-----------------------|
// | Counter | ✓ | | ✓ | ✓ | ✓ |
// | UpDownCounter | ✓ | | ✓ | ✓ | ✓ |
// | Histogram | ✓ | | ✓ | ✓ | ✓ |
// | Gauge | ✓ | ✓ | | ✓ | ✓ |
// | Observable Counter | ✓ | | ✓ | ✓ | ✓ |
// | Observable UpDownCounter | ✓ | | ✓ | ✓ | ✓ |
// | Observable Gauge | ✓ | ✓ | | ✓ | ✓ |.
func isAggregatorCompatible(kind InstrumentKind, agg Aggregation) error {
switch agg.(type) {
case AggregationDefault:
return nil
case AggregationExplicitBucketHistogram, AggregationBase2ExponentialHistogram:
switch kind {
case InstrumentKindCounter,
InstrumentKindUpDownCounter,
InstrumentKindHistogram,
InstrumentKindGauge,
InstrumentKindObservableCounter,
InstrumentKindObservableUpDownCounter,
InstrumentKindObservableGauge:
return nil
default:
return errIncompatibleAggregation
}
case AggregationSum:
switch kind {
case InstrumentKindObservableCounter, InstrumentKindObservableUpDownCounter, InstrumentKindCounter, InstrumentKindHistogram, InstrumentKindUpDownCounter:
return nil
default:
// TODO: review need for aggregation check after
// https://github.com/open-telemetry/opentelemetry-specification/issues/2710
return errIncompatibleAggregation
}
case AggregationLastValue:
switch kind {
case InstrumentKindObservableGauge, InstrumentKindGauge:
return nil
}
// TODO: review need for aggregation check after
// https://github.com/open-telemetry/opentelemetry-specification/issues/2710
return errIncompatibleAggregation
case AggregationDrop:
return nil
default:
// This is reached after checking for the default aggregation; it should be an error at this point.
return fmt.Errorf("%w: %v", errUnknownAggregation, agg)
}
}
// pipelines is the group of pipelines connecting Readers with instrument
// measurement.
type pipelines []*pipeline
func newPipelines(res *resource.Resource, readers []Reader, views []View) pipelines {
pipes := make([]*pipeline, 0, len(readers))
for _, r := range readers {
p := newPipeline(res, r, views)
r.register(p)
pipes = append(pipes, p)
}
return pipes
}
func (p pipelines) registerMultiCallback(c multiCallback) metric.Registration {
unregs := make([]func(), len(p))
for i, pipe := range p {
unregs[i] = pipe.addMultiCallback(c)
}
return unregisterFuncs{f: unregs}
}
type unregisterFuncs struct {
embedded.Registration
f []func()
}
func (u unregisterFuncs) Unregister() error {
for _, f := range u.f {
f()
}
return nil
}
// resolver facilitates resolving the aggregate functions an instrument uses
// to aggregate measurements, while updating all pipelines that need to pull
// from those aggregations.
type resolver[N int64 | float64] struct {
inserters []*inserter[N]
}
func newResolver[N int64 | float64](p pipelines, vc *cache[string, instID]) resolver[N] {
in := make([]*inserter[N], len(p))
for i := range in {
in[i] = newInserter[N](p[i], vc)
}
return resolver[N]{in}
}
// Aggregators returns the Aggregators that must be updated by the instrument
// defined by id.
func (r resolver[N]) Aggregators(id Instrument) ([]aggregate.Measure[N], error) {
var measures []aggregate.Measure[N]
errs := &multierror{}
for _, i := range r.inserters {
in, err := i.Instrument(id, i.readerDefaultAggregation(id.Kind))
if err != nil {
errs.append(err)
}
measures = append(measures, in...)
}
return measures, errs.errorOrNil()
}
// HistogramAggregators returns the histogram Aggregators that must be updated by the instrument
// defined by id. If boundaries were provided on instrument instantiation, those take precedence
// over boundaries provided by the reader.
func (r resolver[N]) HistogramAggregators(id Instrument, boundaries []float64) ([]aggregate.Measure[N], error) {
var measures []aggregate.Measure[N]
errs := &multierror{}
for _, i := range r.inserters {
agg := i.readerDefaultAggregation(id.Kind)
if histAgg, ok := agg.(AggregationExplicitBucketHistogram); ok && len(boundaries) > 0 {
histAgg.Boundaries = boundaries
agg = histAgg
}
in, err := i.Instrument(id, agg)
if err != nil {
errs.append(err)
}
measures = append(measures, in...)
}
return measures, errs.errorOrNil()
}
type multierror struct {
wrapped error
errors []string
}
func (m *multierror) errorOrNil() error {
if len(m.errors) == 0 {
return nil
}
if m.wrapped == nil {
return errors.New(strings.Join(m.errors, "; "))
}
return fmt.Errorf("%w: %s", m.wrapped, strings.Join(m.errors, "; "))
}
func (m *multierror) append(err error) {
m.errors = append(m.errors, err.Error())
}

143
vendor/go.opentelemetry.io/otel/sdk/metric/provider.go generated vendored Normal file
View File

@ -0,0 +1,143 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"context"
"sync/atomic"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/metric/embedded"
"go.opentelemetry.io/otel/metric/noop"
"go.opentelemetry.io/otel/sdk/instrumentation"
)
// MeterProvider handles the creation and coordination of Meters. All Meters
// created by a MeterProvider will be associated with the same Resource, have
// the same Views applied to them, and have their produced metric telemetry
// passed to the configured Readers.
type MeterProvider struct {
embedded.MeterProvider
pipes pipelines
meters cache[instrumentation.Scope, *meter]
forceFlush, shutdown func(context.Context) error
stopped atomic.Bool
}
// Compile-time check MeterProvider implements metric.MeterProvider.
var _ metric.MeterProvider = (*MeterProvider)(nil)
// NewMeterProvider returns a new and configured MeterProvider.
//
// By default, the returned MeterProvider is configured with the default
// Resource and no Readers. Readers cannot be added after a MeterProvider is
// created. This means the returned MeterProvider, if created with no
// Readers, will perform no operations.
func NewMeterProvider(options ...Option) *MeterProvider {
conf := newConfig(options)
flush, sdown := conf.readerSignals()
mp := &MeterProvider{
pipes: newPipelines(conf.res, conf.readers, conf.views),
forceFlush: flush,
shutdown: sdown,
}
// Log after creation so all readers show correctly they are registered.
global.Info("MeterProvider created",
"Resource", conf.res,
"Readers", conf.readers,
"Views", len(conf.views),
)
return mp
}
// Meter returns a Meter with the given name and configured with options.
//
// The name should be the name of the instrumentation scope creating
// telemetry. This name may be the same as the instrumented code only if that
// code provides built-in instrumentation.
//
// Calls to the Meter method after Shutdown has been called will return Meters
// that perform no operations.
//
// This method is safe to call concurrently.
func (mp *MeterProvider) Meter(name string, options ...metric.MeterOption) metric.Meter {
if name == "" {
global.Warn("Invalid Meter name.", "name", name)
}
if mp.stopped.Load() {
return noop.Meter{}
}
c := metric.NewMeterConfig(options...)
s := instrumentation.Scope{
Name: name,
Version: c.InstrumentationVersion(),
SchemaURL: c.SchemaURL(),
}
global.Info("Meter created",
"Name", s.Name,
"Version", s.Version,
"SchemaURL", s.SchemaURL,
)
return mp.meters.Lookup(s, func() *meter {
return newMeter(s, mp.pipes)
})
}
// ForceFlush flushes all pending telemetry.
//
// This method honors the deadline or cancellation of ctx. An appropriate
// error will be returned in these situations. There is no guarantee that all
// telemetry will be flushed or all resources will have been released in these
// situations.
//
// ForceFlush calls ForceFlush(context.Context) error
// on all Readers that implement this method.
//
// This method is safe to call concurrently.
func (mp *MeterProvider) ForceFlush(ctx context.Context) error {
if mp.forceFlush != nil {
return mp.forceFlush(ctx)
}
return nil
}
// Shutdown shuts down the MeterProvider flushing all pending telemetry and
// releasing any held computational resources.
//
// This call is idempotent. The first call will perform all flush and
// releasing operations. Subsequent calls will perform no action and will
// return an error stating this.
//
// Measurements made by instruments from meters this MeterProvider created
// will not be exported after Shutdown is called.
//
// This method honors the deadline or cancellation of ctx. An appropriate
// error will be returned in these situations. There is no guarantee that all
// telemetry will be flushed or all resources will have been released in these
// situations.
//
// This method is safe to call concurrently.
func (mp *MeterProvider) Shutdown(ctx context.Context) error {
// Even though it may seem like there is a synchronization issue between the
// call to `Store` and checking `shutdown`, the Go concurrency model ensures
// that is not the case, as all the atomic operations executed in a program
// behave as though executed in some sequentially consistent order. This
// definition provides the same semantics as C++'s sequentially consistent
// atomics and Java's volatile variables.
// See https://go.dev/ref/mem#atomic and https://pkg.go.dev/sync/atomic.
mp.stopped.Store(true)
if mp.shutdown != nil {
return mp.shutdown(ctx)
}
return nil
}
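// Illustrative sketch, not part of the vendored file: the typical
// MeterProvider lifecycle. It assumes a configured Reader named reader and
// these imports:
//
//	import (
//		"context"
//
//		sdkmetric "go.opentelemetry.io/otel/sdk/metric"
//	)
//
//	provider := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader))
//	defer func() { _ = provider.Shutdown(context.Background()) }()
//
//	meter := provider.Meter("example.com/instrumentation")
//	counter, _ := meter.Int64Counter("requests")
//	counter.Add(context.Background(), 1)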

189
vendor/go.opentelemetry.io/otel/sdk/metric/reader.go generated vendored Normal file
View File

@ -0,0 +1,189 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"context"
"fmt"
"go.opentelemetry.io/otel/sdk/metric/metricdata"
)
// errDuplicateRegister is logged by a Reader when an attempt to register it
// more than once occurs.
var errDuplicateRegister = fmt.Errorf("duplicate reader registration")
// ErrReaderNotRegistered is returned if Collect or Shutdown are called before
// the reader is registered with a MeterProvider.
var ErrReaderNotRegistered = fmt.Errorf("reader is not registered")
// ErrReaderShutdown is returned if Collect or Shutdown are called after a
// reader has been Shutdown once.
var ErrReaderShutdown = fmt.Errorf("reader is shutdown")
// errNonPositiveDuration is logged when an environment variable
// has a non-positive value.
var errNonPositiveDuration = fmt.Errorf("non-positive duration")
// Reader is the interface used between the SDK and an
// exporter. Control flow is bi-directional through the
// Reader, since the SDK initiates ForceFlush and Shutdown
// while the exporter initiates collection. The Register() method here
// informs the Reader that it can begin reading, signaling the
// start of bi-directional control flow.
//
// Typically, push-based exporters that are periodic will
// implement the Exporter interface themselves and construct a
// PeriodicReader to satisfy this interface.
//
// Pull-based exporters will typically implement Register
// themselves, since they read on demand.
//
// Warning: methods may be added to this interface in minor releases.
type Reader interface {
// register registers a Reader with a MeterProvider.
// The producer argument allows the Reader to signal the sdk to collect
// and send aggregated metric measurements.
register(sdkProducer)
// temporality reports the Temporality for the instrument kind provided.
//
// This method needs to be concurrent safe with itself and all the other
// Reader methods.
temporality(InstrumentKind) metricdata.Temporality
// aggregation returns what Aggregation to use for an instrument kind.
//
// This method needs to be concurrent safe with itself and all the other
// Reader methods.
aggregation(InstrumentKind) Aggregation // nolint:revive // import-shadow for method scoped by type.
// Collect gathers and returns all metric data related to the Reader from
// the SDK and stores it in out. An error is returned if this is called
// after Shutdown or if out is nil.
//
// This method needs to be concurrent safe, and the cancellation of the
// passed context is expected to be honored.
Collect(ctx context.Context, rm *metricdata.ResourceMetrics) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Shutdown flushes all metric measurements held in an export pipeline and releases any
// held computational resources.
//
// The deadline or cancellation of the passed context is honored. An appropriate
// error will be returned in these situations. There is no guarantee that all
// telemetry will be flushed or all resources will have been released in these
// situations.
//
// After Shutdown is called, calls to Collect will perform no operation and instead will return
// an error indicating the shutdown state.
//
// This method needs to be concurrent safe.
Shutdown(context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
// sdkProducer produces metrics for a Reader.
type sdkProducer interface {
// produce returns aggregated metrics from a single collection.
//
// This method is safe to call concurrently.
produce(context.Context, *metricdata.ResourceMetrics) error
}
// Producer produces metrics for a Reader from an external source.
type Producer interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Produce returns aggregated metrics from an external source.
//
// This method should be safe to call concurrently.
Produce(context.Context) ([]metricdata.ScopeMetrics, error)
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
// produceHolder is used as an atomic.Value to wrap the non-concrete producer
// type.
type produceHolder struct {
produce func(context.Context, *metricdata.ResourceMetrics) error
}
// shutdownProducer produces an ErrReaderShutdown error always.
type shutdownProducer struct{}
// produce returns an ErrReaderShutdown error.
func (p shutdownProducer) produce(context.Context, *metricdata.ResourceMetrics) error {
return ErrReaderShutdown
}
// TemporalitySelector selects the temporality to use based on the InstrumentKind.
type TemporalitySelector func(InstrumentKind) metricdata.Temporality
// DefaultTemporalitySelector is the default TemporalitySelector used if
// WithTemporalitySelector is not provided. CumulativeTemporality will be used
// for all instrument kinds if this TemporalitySelector is used.
func DefaultTemporalitySelector(InstrumentKind) metricdata.Temporality {
return metricdata.CumulativeTemporality
}
// AggregationSelector selects the aggregation and the parameters to use for
// that aggregation based on the InstrumentKind.
//
// If the Aggregation returned is nil or DefaultAggregation, the selection from
// DefaultAggregationSelector will be used.
type AggregationSelector func(InstrumentKind) Aggregation
// DefaultAggregationSelector returns the default aggregation and parameters
// that will be used to summarize measurements made from an instrument of
// InstrumentKind. This AggregationSelector uses the following selection
// mapping: Counter ⇨ Sum, Observable Counter ⇨ Sum, UpDownCounter ⇨ Sum,
// Observable UpDownCounter ⇨ Sum, Gauge ⇨ LastValue,
// Observable Gauge ⇨ LastValue, Histogram ⇨ ExplicitBucketHistogram.
func DefaultAggregationSelector(ik InstrumentKind) Aggregation {
switch ik {
case InstrumentKindCounter, InstrumentKindUpDownCounter, InstrumentKindObservableCounter, InstrumentKindObservableUpDownCounter:
return AggregationSum{}
case InstrumentKindObservableGauge, InstrumentKindGauge:
return AggregationLastValue{}
case InstrumentKindHistogram:
return AggregationExplicitBucketHistogram{
Boundaries: []float64{0, 5, 10, 25, 50, 75, 100, 250, 500, 750, 1000, 2500, 5000, 7500, 10000},
NoMinMax: false,
}
}
panic("unknown instrument kind")
}
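// Illustrative sketch, not part of the vendored file: a custom
// AggregationSelector that overrides histograms and falls back to the
// default mapping above. NewManualReader and WithAggregationSelector are
// defined elsewhere in this package:
//
//	sel := func(ik InstrumentKind) Aggregation {
//		if ik == InstrumentKindHistogram {
//			return AggregationBase2ExponentialHistogram{MaxSize: 160, MaxScale: 20}
//		}
//		return DefaultAggregationSelector(ik)
//	}
//	reader := NewManualReader(WithAggregationSelector(sel))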
// ReaderOption is an option which can be applied to manual or Periodic
// readers.
type ReaderOption interface {
PeriodicReaderOption
ManualReaderOption
}
// WithProducer registers p as an external Producer of metric data
// for this Reader.
func WithProducer(p Producer) ReaderOption {
return producerOption{p: p}
}
type producerOption struct {
p Producer
}
// applyManual returns a manualReaderConfig with option applied.
func (o producerOption) applyManual(c manualReaderConfig) manualReaderConfig {
c.producers = append(c.producers, o.p)
return c
}
// applyPeriodic returns a periodicReaderConfig with option applied.
func (o producerOption) applyPeriodic(c periodicReaderConfig) periodicReaderConfig {
c.producers = append(c.producers, o.p)
return c
}
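// Illustrative sketch, not part of the vendored file: bridging metrics from
// a non-SDK source into a Reader via WithProducer. The bridge type is
// hypothetical; NewManualReader is defined elsewhere in this package:
//
//	type bridge struct{}
//
//	func (bridge) Produce(ctx context.Context) ([]metricdata.ScopeMetrics, error) {
//		// Gather and return metrics from the external source here.
//		return []metricdata.ScopeMetrics{}, nil
//	}
//
//	reader := NewManualReader(WithProducer(bridge{}))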

View File

@ -0,0 +1,9 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
// version is the current release version of the metric SDK in use.
func version() string {
return "1.28.0"
}

117
vendor/go.opentelemetry.io/otel/sdk/metric/view.go generated vendored Normal file
View File

@ -0,0 +1,117 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package metric // import "go.opentelemetry.io/otel/sdk/metric"
import (
"errors"
"regexp"
"strings"
"go.opentelemetry.io/otel/internal/global"
)
var (
errMultiInst = errors.New("name replacement for multiple instruments")
errEmptyView = errors.New("no criteria provided for view")
emptyView = func(Instrument) (Stream, bool) { return Stream{}, false }
)
// View is an override to the default behavior of the SDK. It defines how data
// should be collected for certain instruments. It returns true and the exact
// Stream to use for matching Instruments. Otherwise, if the view does not
// match, false is returned.
type View func(Instrument) (Stream, bool)
// NewView returns a View that applies the Stream mask for all instruments that
// match criteria. The returned View will only apply mask if all non-zero-value
// fields of criteria match the corresponding Instrument passed to the view. If
// no criteria are provided (i.e. all fields of criteria are their zero-values), a
// view that matches no instruments is returned. If you need to match a
// zero-value field, create a View directly.
//
// The Name field of criteria supports wildcard pattern matching. The "*"
// wildcard is recognized as matching zero or more characters, and "?" is
// recognized as matching exactly one character. For example, a pattern of "*"
// matches all instrument names.
//
// The Stream mask only applies updates for non-zero-value fields. By default,
// the Instrument the View matches against will be used for the Name,
// Description, and Unit of the returned Stream, and no Aggregation or
// AttributeFilter is set. All non-zero-value fields of mask are used instead
// of the default. If you need to zero out a Stream field returned from a
// View, create a View directly.
func NewView(criteria Instrument, mask Stream) View {
if criteria.IsEmpty() {
global.Error(
errEmptyView, "dropping view",
"mask", mask,
)
return emptyView
}
var matchFunc func(Instrument) bool
if strings.ContainsAny(criteria.Name, "*?") {
if mask.Name != "" {
global.Error(
errMultiInst, "dropping view",
"criteria", criteria,
"mask", mask,
)
return emptyView
}
// Handle branching here in NewView instead of criteria.matches so
// criteria.matches remains inlinable for the simple case.
pattern := regexp.QuoteMeta(criteria.Name)
pattern = "^" + pattern + "$"
pattern = strings.ReplaceAll(pattern, `\?`, ".")
pattern = strings.ReplaceAll(pattern, `\*`, ".*")
re := regexp.MustCompile(pattern)
matchFunc = func(i Instrument) bool {
return re.MatchString(i.Name) &&
criteria.matchesDescription(i) &&
criteria.matchesKind(i) &&
criteria.matchesUnit(i) &&
criteria.matchesScope(i)
}
} else {
matchFunc = criteria.matches
}
var agg Aggregation
if mask.Aggregation != nil {
agg = mask.Aggregation.copy()
if err := agg.err(); err != nil {
global.Error(
err, "not using aggregation with view",
"criteria", criteria,
"mask", mask,
)
agg = nil
}
}
return func(i Instrument) (Stream, bool) {
if matchFunc(i) {
return Stream{
Name: nonZero(mask.Name, i.Name),
Description: nonZero(mask.Description, i.Description),
Unit: nonZero(mask.Unit, i.Unit),
Aggregation: agg,
AttributeFilter: mask.AttributeFilter,
}, true
}
return Stream{}, false
}
}
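// Illustrative sketch, not part of the vendored file: a wildcard View that
// rewrites the aggregation for all matching histograms. The instrument name
// pattern is hypothetical; NewMeterProvider, WithReader, and WithView are
// defined elsewhere in this package:
//
//	view := NewView(
//		Instrument{Name: "*.latency"},
//		Stream{Aggregation: AggregationExplicitBucketHistogram{
//			Boundaries: []float64{0, 10, 50, 100, 500, 1000},
//		}},
//	)
//	provider := NewMeterProvider(WithReader(reader), WithView(view))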
// nonZero returns v if it is non-zero-valued, otherwise alt.
func nonZero[T comparable](v, alt T) T {
var zero T
if v != zero {
return v
}
return alt
}

View File

@ -0,0 +1,3 @@
# SDK Resource
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/sdk/resource)](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource)

118
vendor/go.opentelemetry.io/otel/sdk/resource/auto.go generated vendored Normal file
View File

@ -0,0 +1,118 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"errors"
"fmt"
"strings"
)
// ErrPartialResource is returned by a detector when complete source
// information for a Resource is unavailable or the source information
// contains invalid values that are omitted from the returned Resource.
var ErrPartialResource = errors.New("partial resource")
// Detector detects OpenTelemetry resource information.
type Detector interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Detect returns an initialized Resource based on gathered information.
// If the source information to construct a Resource contains invalid
// values, a Resource is returned with the valid parts of the source
// information used for initialization along with an appropriately
// wrapped ErrPartialResource error.
Detect(ctx context.Context) (*Resource, error)
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
// Detect returns a new [Resource] merged from all the Resources each of the
// detectors produces. Each of the detectors is called sequentially, in the
// order they are passed, merging the produced resource into the previous.
//
// This may return a partial Resource along with an error containing
// [ErrPartialResource] if that error is returned from a detector. It may also
// return a merge-conflicting Resource along with an error containing
// [ErrSchemaURLConflict] if merging Resources from different detectors results
// in a schema URL conflict. It is up to the caller to determine if this
// returned Resource should be used or not.
//
// If one of the detectors returns an error that is not [ErrPartialResource],
// the resource produced by the detector will not be merged and the returned
// error will wrap that detector's error.
func Detect(ctx context.Context, detectors ...Detector) (*Resource, error) {
r := new(Resource)
return r, detect(ctx, r, detectors)
}
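// Illustrative sketch, not part of the vendored file: merging a built-in
// detector with a custom one. myDetector is a hypothetical Detector
// implementation and ctx a context.Context:
//
//	res, err := Detect(ctx, host{}, myDetector{})
//	if err != nil && !errors.Is(err, ErrPartialResource) {
//		// A non-partial failure; res may still hold merged valid parts,
//		// so inspect it before discarding.
//	}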
// detect runs all detectors using ctx and merges the result into res. This
// assumes res is allocated and not nil; it will panic otherwise.
//
// If the detectors or merging resources produce any errors (i.e.
// [ErrPartialResource], [ErrSchemaURLConflict]), a single error wrapping all of
// these errors will be returned. Otherwise, nil is returned.
func detect(ctx context.Context, res *Resource, detectors []Detector) error {
var (
r *Resource
errs detectErrs
err error
)
for _, detector := range detectors {
if detector == nil {
continue
}
r, err = detector.Detect(ctx)
if err != nil {
errs = append(errs, err)
if !errors.Is(err, ErrPartialResource) {
continue
}
}
r, err = Merge(res, r)
if err != nil {
errs = append(errs, err)
}
*res = *r
}
if len(errs) == 0 {
return nil
}
if errors.Is(errs, ErrSchemaURLConflict) {
// If there has been a merge conflict, ensure the resource has no
// schema URL.
res.schemaURL = ""
}
return errs
}
type detectErrs []error
func (e detectErrs) Error() string {
errStr := make([]string, len(e))
for i, err := range e {
errStr[i] = fmt.Sprintf("* %s", err)
}
format := "%d errors occurred detecting resource:\n\t%s"
return fmt.Sprintf(format, len(e), strings.Join(errStr, "\n\t"))
}
func (e detectErrs) Unwrap() error {
switch len(e) {
case 0:
return nil
case 1:
return e[0]
}
return e[1:]
}
func (e detectErrs) Is(target error) bool {
return len(e) != 0 && errors.Is(e[0], target)
}

118
vendor/go.opentelemetry.io/otel/sdk/resource/builtin.go generated vendored Normal file
View File

@ -0,0 +1,118 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"fmt"
"os"
"path/filepath"
"github.com/google/uuid"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)
type (
// telemetrySDK is a Detector that provides information about
// the OpenTelemetry SDK used. This Detector is included as a
// builtin. If these resource attributes are not wanted,
// construct the Resource without the WithTelemetrySDK option.
telemetrySDK struct{}
// host is a Detector that provides information about the host
// being run on. This Detector is included as a builtin. If
// these resource attributes are not wanted, construct the
// Resource without the WithHost option.
host struct{}
stringDetector struct {
schemaURL string
K attribute.Key
F func() (string, error)
}
defaultServiceNameDetector struct{}
defaultServiceInstanceIDDetector struct{}
)
var (
_ Detector = telemetrySDK{}
_ Detector = host{}
_ Detector = stringDetector{}
_ Detector = defaultServiceNameDetector{}
_ Detector = defaultServiceInstanceIDDetector{}
)
// Detect returns a *Resource that describes the OpenTelemetry SDK used.
func (telemetrySDK) Detect(context.Context) (*Resource, error) {
return NewWithAttributes(
semconv.SchemaURL,
semconv.TelemetrySDKName("opentelemetry"),
semconv.TelemetrySDKLanguageGo,
semconv.TelemetrySDKVersion(sdk.Version()),
), nil
}
// Detect returns a *Resource that describes the host being run on.
func (host) Detect(ctx context.Context) (*Resource, error) {
return StringDetector(semconv.SchemaURL, semconv.HostNameKey, os.Hostname).Detect(ctx)
}
// StringDetector returns a Detector that will produce a *Resource
// containing the string as a value corresponding to k. The resulting Resource
// will have the specified schemaURL.
func StringDetector(schemaURL string, k attribute.Key, f func() (string, error)) Detector {
return stringDetector{schemaURL: schemaURL, K: k, F: f}
}
// Detect returns a *Resource that describes the string as a value
// corresponding to attribute.Key as well as the specific schemaURL.
func (sd stringDetector) Detect(ctx context.Context) (*Resource, error) {
value, err := sd.F()
if err != nil {
return nil, fmt.Errorf("%s: %w", string(sd.K), err)
}
a := sd.K.String(value)
if !a.Valid() {
return nil, fmt.Errorf("invalid attribute: %q -> %q", a.Key, a.Value.Emit())
}
return NewWithAttributes(sd.schemaURL, sd.K.String(value)), nil
}
// Detect implements Detector.
func (defaultServiceNameDetector) Detect(ctx context.Context) (*Resource, error) {
return StringDetector(
semconv.SchemaURL,
semconv.ServiceNameKey,
func() (string, error) {
executable, err := os.Executable()
if err != nil {
return "unknown_service:go", nil
}
return "unknown_service:" + filepath.Base(executable), nil
},
).Detect(ctx)
}
// Detect implements Detector.
func (defaultServiceInstanceIDDetector) Detect(ctx context.Context) (*Resource, error) {
return StringDetector(
semconv.SchemaURL,
semconv.ServiceInstanceIDKey,
func() (string, error) {
version4Uuid, err := uuid.NewRandom()
if err != nil {
return "", err
}
return version4Uuid.String(), nil
},
).Detect(ctx)
}
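// Illustrative sketch, not part of the vendored file: using StringDetector
// to attach a custom attribute. The attribute key and environment variable
// name are hypothetical:
//
//	region := StringDetector(semconv.SchemaURL, attribute.Key("cloud.region"),
//		func() (string, error) { return os.Getenv("MY_REGION"), nil },
//	)
//	res, _ := Detect(context.Background(), region)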

195
vendor/go.opentelemetry.io/otel/sdk/resource/config.go generated vendored Normal file
View File

@ -0,0 +1,195 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"go.opentelemetry.io/otel/attribute"
)
// config contains configuration for Resource creation.
type config struct {
// detectors that will be evaluated.
detectors []Detector
// SchemaURL to associate with the Resource.
schemaURL string
}
// Option is the interface that applies a configuration option.
type Option interface {
// apply sets the Option value of a config.
apply(config) config
}
// WithAttributes adds attributes to the configured Resource.
func WithAttributes(attributes ...attribute.KeyValue) Option {
return WithDetectors(detectAttributes{attributes})
}
type detectAttributes struct {
attributes []attribute.KeyValue
}
func (d detectAttributes) Detect(context.Context) (*Resource, error) {
return NewSchemaless(d.attributes...), nil
}
// WithDetectors adds detectors to be evaluated for the configured resource.
func WithDetectors(detectors ...Detector) Option {
return detectorsOption{detectors: detectors}
}
type detectorsOption struct {
detectors []Detector
}
func (o detectorsOption) apply(cfg config) config {
cfg.detectors = append(cfg.detectors, o.detectors...)
return cfg
}
// WithFromEnv adds attributes from environment variables to the configured resource.
func WithFromEnv() Option {
return WithDetectors(fromEnv{})
}
// WithHost adds attributes from the host to the configured resource.
func WithHost() Option {
return WithDetectors(host{})
}
// WithHostID adds host ID information to the configured resource.
func WithHostID() Option {
return WithDetectors(hostIDDetector{})
}
// WithTelemetrySDK adds TelemetrySDK version info to the configured resource.
func WithTelemetrySDK() Option {
return WithDetectors(telemetrySDK{})
}
// WithSchemaURL sets the schema URL for the configured resource.
func WithSchemaURL(schemaURL string) Option {
return schemaURLOption(schemaURL)
}
type schemaURLOption string
func (o schemaURLOption) apply(cfg config) config {
cfg.schemaURL = string(o)
return cfg
}
// WithOS adds all the OS attributes to the configured Resource.
// See individual WithOS* functions to configure specific attributes.
func WithOS() Option {
return WithDetectors(
osTypeDetector{},
osDescriptionDetector{},
)
}
// WithOSType adds an attribute with the operating system type to the configured Resource.
func WithOSType() Option {
return WithDetectors(osTypeDetector{})
}
// WithOSDescription adds an attribute with the operating system description to the
// configured Resource. The formatted string is equivalent to the output of the
// `uname -snrvm` command.
func WithOSDescription() Option {
return WithDetectors(osDescriptionDetector{})
}
// WithProcess adds all the Process attributes to the configured Resource.
//
// Warning! This option will include process command line arguments. If these
// contain sensitive information it will be included in the exported resource.
//
// This option is equivalent to calling WithProcessPID,
// WithProcessExecutableName, WithProcessExecutablePath,
// WithProcessCommandArgs, WithProcessOwner, WithProcessRuntimeName,
// WithProcessRuntimeVersion, and WithProcessRuntimeDescription. See each
// option function for information about what resource attributes each
// includes.
func WithProcess() Option {
return WithDetectors(
processPIDDetector{},
processExecutableNameDetector{},
processExecutablePathDetector{},
processCommandArgsDetector{},
processOwnerDetector{},
processRuntimeNameDetector{},
processRuntimeVersionDetector{},
processRuntimeDescriptionDetector{},
)
}
// WithProcessPID adds an attribute with the process identifier (PID) to the
// configured Resource.
func WithProcessPID() Option {
return WithDetectors(processPIDDetector{})
}
// WithProcessExecutableName adds an attribute with the name of the process
// executable to the configured Resource.
func WithProcessExecutableName() Option {
return WithDetectors(processExecutableNameDetector{})
}
// WithProcessExecutablePath adds an attribute with the full path to the process
// executable to the configured Resource.
func WithProcessExecutablePath() Option {
return WithDetectors(processExecutablePathDetector{})
}
// WithProcessCommandArgs adds an attribute with all the command arguments (including
// the command/executable itself) as received by the process to the configured
// Resource.
//
// Warning! This option will include process command line arguments. If these
// contain sensitive information, it will be included in the exported resource.
func WithProcessCommandArgs() Option {
return WithDetectors(processCommandArgsDetector{})
}
// WithProcessOwner adds an attribute with the username of the user that owns the process
// to the configured Resource.
func WithProcessOwner() Option {
return WithDetectors(processOwnerDetector{})
}
// WithProcessRuntimeName adds an attribute with the name of the runtime of this
// process to the configured Resource.
func WithProcessRuntimeName() Option {
return WithDetectors(processRuntimeNameDetector{})
}
// WithProcessRuntimeVersion adds an attribute with the version of the runtime of
// this process to the configured Resource.
func WithProcessRuntimeVersion() Option {
return WithDetectors(processRuntimeVersionDetector{})
}
// WithProcessRuntimeDescription adds an attribute with an additional description
// about the runtime of the process to the configured Resource.
func WithProcessRuntimeDescription() Option {
return WithDetectors(processRuntimeDescriptionDetector{})
}
// WithContainer adds all the Container attributes to the configured Resource.
// See individual WithContainer* functions to configure specific attributes.
func WithContainer() Option {
return WithDetectors(
cgroupContainerIDDetector{},
)
}
// WithContainerID adds an attribute with the id of the container to the configured Resource.
// Note: WithContainerID will not extract the correct container ID in an ECS environment.
// Please use the ECS resource detector instead (https://pkg.go.dev/go.opentelemetry.io/contrib/detectors/aws/ecs).
func WithContainerID() Option {
return WithDetectors(cgroupContainerIDDetector{})
}
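A minimal usage sketch combining several of the options above; the option mix and the "demo" service name are illustrative, not prescribed by the package.

package main

import (
    "context"
    "fmt"

    "go.opentelemetry.io/otel/attribute"
    "go.opentelemetry.io/otel/sdk/resource"
)

func main() {
    // Detectors are evaluated in order and their results merged.
    res, err := resource.New(
        context.Background(),
        resource.WithFromEnv(),      // OTEL_RESOURCE_ATTRIBUTES, OTEL_SERVICE_NAME
        resource.WithTelemetrySDK(), // telemetry.sdk.* attributes
        resource.WithHost(),         // host.name
        resource.WithAttributes(attribute.String("service.name", "demo")),
    )
    if err != nil {
        fmt.Println("partial resource:", err) // may wrap ErrPartialResource
    }
    fmt.Println(res)
}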


@ -0,0 +1,89 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"bufio"
"context"
"errors"
"io"
"os"
"regexp"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)
type containerIDProvider func() (string, error)
var (
containerID containerIDProvider = getContainerIDFromCGroup
cgroupContainerIDRe = regexp.MustCompile(`^.*/(?:.*[-:])?([0-9a-f]+)(?:\.|\s*$)`)
)
type cgroupContainerIDDetector struct{}
const cgroupPath = "/proc/self/cgroup"
// Detect returns a *Resource that describes the ID of the container.
// If no container ID is found, an empty Resource is returned.
func (cgroupContainerIDDetector) Detect(ctx context.Context) (*Resource, error) {
containerID, err := containerID()
if err != nil {
return nil, err
}
if containerID == "" {
return Empty(), nil
}
return NewWithAttributes(semconv.SchemaURL, semconv.ContainerID(containerID)), nil
}
var (
defaultOSStat = os.Stat
osStat = defaultOSStat
defaultOSOpen = func(name string) (io.ReadCloser, error) {
return os.Open(name)
}
osOpen = defaultOSOpen
)
// getContainerIDFromCGroup returns the ID of the container from the cgroup file.
// If no container ID is found, an empty string is returned.
func getContainerIDFromCGroup() (string, error) {
if _, err := osStat(cgroupPath); errors.Is(err, os.ErrNotExist) {
// File does not exist, skip
return "", nil
}
file, err := osOpen(cgroupPath)
if err != nil {
return "", err
}
defer file.Close()
return getContainerIDFromReader(file), nil
}
// getContainerIDFromReader returns the ID of the container read from reader.
func getContainerIDFromReader(reader io.Reader) string {
scanner := bufio.NewScanner(reader)
for scanner.Scan() {
line := scanner.Text()
if id := getContainerIDFromLine(line); id != "" {
return id
}
}
return ""
}
// getContainerIDFromLine returns the ID of the container from a single line.
func getContainerIDFromLine(line string) string {
matches := cgroupContainerIDRe.FindStringSubmatch(line)
if len(matches) <= 1 {
return ""
}
return matches[1]
}
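A standalone sketch of the container-ID regular expression above, run against one hypothetical cgroup line; the ID is the trailing hex segment.

package main

import (
    "fmt"
    "regexp"
)

// Same pattern as cgroupContainerIDRe above.
var re = regexp.MustCompile(`^.*/(?:.*[-:])?([0-9a-f]+)(?:\.|\s*$)`)

func main() {
    line := "13:name=systemd:/pod/docker/kubepods/ac679f8a8319c8cf7d38e1adf263bc08d23"
    if m := re.FindStringSubmatch(line); len(m) > 1 {
        fmt.Println(m[1]) // ac679f8a8319c8cf7d38e1adf263bc08d23
    }
}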

20
vendor/go.opentelemetry.io/otel/sdk/resource/doc.go generated vendored Normal file

@ -0,0 +1,20 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package resource provides detecting and representing resources.
//
// The fundamental struct is a Resource which holds identifying information
// about the entities for which telemetry is exported.
//
// To automatically construct Resources from an environment a Detector
// interface is defined. Implementations of this interface can be passed to
// the Detect function to generate a Resource from the merged information.
//
// To load a user defined Resource from the environment variable
// OTEL_RESOURCE_ATTRIBUTES the FromEnv Detector can be used. It will interpret
// the value as a list of comma delimited key/value pairs
// (e.g. `<key1>=<value1>,<key2>=<value2>,...`).
//
// While this package provides a stable API,
// the attributes added by resource detectors may change.
package resource // import "go.opentelemetry.io/otel/sdk/resource"

95
vendor/go.opentelemetry.io/otel/sdk/resource/env.go generated vendored Normal file

@ -0,0 +1,95 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"fmt"
"net/url"
"os"
"strings"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)
const (
// resourceAttrKey is the environment variable name OpenTelemetry Resource information will be read from.
resourceAttrKey = "OTEL_RESOURCE_ATTRIBUTES" //nolint:gosec // False positive G101: Potential hardcoded credentials
// svcNameKey is the environment variable name that Service Name information will be read from.
svcNameKey = "OTEL_SERVICE_NAME"
)
// errMissingValue is returned when a resource value is missing.
var errMissingValue = fmt.Errorf("%w: missing value", ErrPartialResource)
// fromEnv is a Detector that collects resources from the environment. This
// Detector is included as a builtin.
type fromEnv struct{}
// compile-time assertion that fromEnv implements the Detector interface.
var _ Detector = fromEnv{}
// Detect collects resources from the environment.
func (fromEnv) Detect(context.Context) (*Resource, error) {
attrs := strings.TrimSpace(os.Getenv(resourceAttrKey))
svcName := strings.TrimSpace(os.Getenv(svcNameKey))
if attrs == "" && svcName == "" {
return Empty(), nil
}
var res *Resource
if svcName != "" {
res = NewSchemaless(semconv.ServiceName(svcName))
}
r2, err := constructOTResources(attrs)
// Ensure that the resource with the service name from OTEL_SERVICE_NAME
// takes precedence, if it was defined.
res, err2 := Merge(r2, res)
if err == nil {
err = err2
} else if err2 != nil {
err = fmt.Errorf("detecting resources: %s", []string{err.Error(), err2.Error()})
}
return res, err
}
func constructOTResources(s string) (*Resource, error) {
if s == "" {
return Empty(), nil
}
pairs := strings.Split(s, ",")
var attrs []attribute.KeyValue
var invalid []string
for _, p := range pairs {
k, v, found := strings.Cut(p, "=")
if !found {
invalid = append(invalid, p)
continue
}
key := strings.TrimSpace(k)
val, err := url.PathUnescape(strings.TrimSpace(v))
if err != nil {
// Retain original value if decoding fails, otherwise it will be
// an empty string.
val = v
otel.Handle(err)
}
attrs = append(attrs, attribute.String(key, val))
}
var err error
if len(invalid) > 0 {
err = fmt.Errorf("%w: %v", errMissingValue, invalid)
}
return NewSchemaless(attrs...), err
}
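A runnable sketch of the environment detection above, with hypothetical variable values; OTEL_SERVICE_NAME takes precedence over a service.name entry in OTEL_RESOURCE_ATTRIBUTES.

package main

import (
    "fmt"
    "os"

    "go.opentelemetry.io/otel/sdk/resource"
)

func main() {
    os.Setenv("OTEL_SERVICE_NAME", "checkout")
    os.Setenv("OTEL_RESOURCE_ATTRIBUTES", "deployment.environment=staging,service.version=1.2.3")
    res := resource.Environment()
    // Attributes are sorted by key when encoded:
    // deployment.environment=staging,service.name=checkout,service.version=1.2.3
    fmt.Println(res)
}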

109
vendor/go.opentelemetry.io/otel/sdk/resource/host_id.go generated vendored Normal file

@ -0,0 +1,109 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"errors"
"strings"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)
type hostIDProvider func() (string, error)
var defaultHostIDProvider hostIDProvider = platformHostIDReader.read
var hostID = defaultHostIDProvider
type hostIDReader interface {
read() (string, error)
}
type fileReader func(string) (string, error)
type commandExecutor func(string, ...string) (string, error)
// hostIDReaderBSD implements hostIDReader.
type hostIDReaderBSD struct {
execCommand commandExecutor
readFile fileReader
}
// read attempts to read the host ID from /etc/hostid. If not found, it
// executes `kenv -q smbios.system.uuid`. If neither location yields an ID,
// an error is returned.
func (r *hostIDReaderBSD) read() (string, error) {
if result, err := r.readFile("/etc/hostid"); err == nil {
return strings.TrimSpace(result), nil
}
if result, err := r.execCommand("kenv", "-q", "smbios.system.uuid"); err == nil {
return strings.TrimSpace(result), nil
}
return "", errors.New("host id not found in: /etc/hostid or kenv")
}
// hostIDReaderDarwin implements hostIDReader.
type hostIDReaderDarwin struct {
execCommand commandExecutor
}
// read executes `ioreg -rd1 -c "IOPlatformExpertDevice"` and parses the host
// ID from the IOPlatformUUID line. If the command fails or the UUID cannot be
// parsed, an error is returned.
func (r *hostIDReaderDarwin) read() (string, error) {
result, err := r.execCommand("ioreg", "-rd1", "-c", "IOPlatformExpertDevice")
if err != nil {
return "", err
}
lines := strings.Split(result, "\n")
for _, line := range lines {
if strings.Contains(line, "IOPlatformUUID") {
parts := strings.Split(line, " = ")
if len(parts) == 2 {
return strings.Trim(parts[1], "\""), nil
}
break
}
}
return "", errors.New("could not parse IOPlatformUUID")
}
type hostIDReaderLinux struct {
readFile fileReader
}
// read attempts to read the machine-id from /etc/machine-id followed by
// /var/lib/dbus/machine-id. If neither location yields an ID, an error is
// returned.
func (r *hostIDReaderLinux) read() (string, error) {
if result, err := r.readFile("/etc/machine-id"); err == nil {
return strings.TrimSpace(result), nil
}
if result, err := r.readFile("/var/lib/dbus/machine-id"); err == nil {
return strings.TrimSpace(result), nil
}
return "", errors.New("host id not found in: /etc/machine-id or /var/lib/dbus/machine-id")
}
type hostIDDetector struct{}
// Detect returns a *Resource containing the platform-specific host ID.
func (hostIDDetector) Detect(ctx context.Context) (*Resource, error) {
hostID, err := hostID()
if err != nil {
return nil, err
}
return NewWithAttributes(
semconv.SchemaURL,
semconv.HostID(hostID),
), nil
}
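A minimal sketch wiring the host ID detector through the public WithHostID option; the printed value is machine-specific and the commented output is illustrative.

package main

import (
    "context"
    "fmt"

    "go.opentelemetry.io/otel/sdk/resource"
)

func main() {
    res, err := resource.New(context.Background(), resource.WithHostID())
    if err != nil {
        fmt.Println("host ID detection failed:", err)
        return
    }
    fmt.Println(res) // e.g. host.id=5abcdef0-1234-... (illustrative)
}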


@ -0,0 +1,12 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:build dragonfly || freebsd || netbsd || openbsd || solaris
// +build dragonfly freebsd netbsd openbsd solaris
package resource // import "go.opentelemetry.io/otel/sdk/resource"
var platformHostIDReader hostIDReader = &hostIDReaderBSD{
execCommand: execCommand,
readFile: readFile,
}


@ -0,0 +1,8 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
var platformHostIDReader hostIDReader = &hostIDReaderDarwin{
execCommand: execCommand,
}


@ -0,0 +1,18 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:build darwin || dragonfly || freebsd || netbsd || openbsd || solaris
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import "os/exec"
func execCommand(name string, arg ...string) (string, error) {
cmd := exec.Command(name, arg...)
b, err := cmd.Output()
if err != nil {
return "", err
}
return string(b), nil
}


@ -0,0 +1,11 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:build linux
// +build linux
package resource // import "go.opentelemetry.io/otel/sdk/resource"
var platformHostIDReader hostIDReader = &hostIDReaderLinux{
readFile: readFile,
}


@ -0,0 +1,17 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:build linux || dragonfly || freebsd || netbsd || openbsd || solaris
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import "os"
func readFile(filename string) (string, error) {
b, err := os.ReadFile(filename)
if err != nil {
return "", err
}
return string(b), nil
}


@ -0,0 +1,19 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:build !darwin && !dragonfly && !freebsd && !linux && !netbsd && !openbsd && !solaris && !windows
// +build !darwin,!dragonfly,!freebsd,!linux,!netbsd,!openbsd,!solaris,!windows
package resource // import "go.opentelemetry.io/otel/sdk/resource"
// hostIDReaderUnsupported is a placeholder implementation for operating systems
// for which this project currently doesn't support host.id
// attribute detection. See the build tags declaration at the top of this
// file for the list of unsupported OSes.
type hostIDReaderUnsupported struct{}
func (*hostIDReaderUnsupported) read() (string, error) {
return "<unknown>", nil
}
var platformHostIDReader hostIDReader = &hostIDReaderUnsupported{}


@ -0,0 +1,37 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:build windows
// +build windows
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"golang.org/x/sys/windows/registry"
)
// hostIDReaderWindows implements hostIDReader.
type hostIDReaderWindows struct{}
// read reads MachineGuid from the Windows registry key:
// SOFTWARE\Microsoft\Cryptography
func (*hostIDReaderWindows) read() (string, error) {
k, err := registry.OpenKey(
registry.LOCAL_MACHINE, `SOFTWARE\Microsoft\Cryptography`,
registry.QUERY_VALUE|registry.WOW64_64KEY,
)
if err != nil {
return "", err
}
defer k.Close()
guid, _, err := k.GetStringValue("MachineGuid")
if err != nil {
return "", err
}
return guid, nil
}
var platformHostIDReader hostIDReader = &hostIDReaderWindows{}

89
vendor/go.opentelemetry.io/otel/sdk/resource/os.go generated vendored Normal file

@ -0,0 +1,89 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"strings"
"go.opentelemetry.io/otel/attribute"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)
type osDescriptionProvider func() (string, error)
var defaultOSDescriptionProvider osDescriptionProvider = platformOSDescription
var osDescription = defaultOSDescriptionProvider
func setDefaultOSDescriptionProvider() {
setOSDescriptionProvider(defaultOSDescriptionProvider)
}
func setOSDescriptionProvider(osDescriptionProvider osDescriptionProvider) {
osDescription = osDescriptionProvider
}
type (
osTypeDetector struct{}
osDescriptionDetector struct{}
)
// Detect returns a *Resource that describes the operating system type the
// service is running on.
func (osTypeDetector) Detect(ctx context.Context) (*Resource, error) {
osType := runtimeOS()
osTypeAttribute := mapRuntimeOSToSemconvOSType(osType)
return NewWithAttributes(
semconv.SchemaURL,
osTypeAttribute,
), nil
}
// Detect returns a *Resource that describes the operating system the
// service is running on.
func (osDescriptionDetector) Detect(ctx context.Context) (*Resource, error) {
description, err := osDescription()
if err != nil {
return nil, err
}
return NewWithAttributes(
semconv.SchemaURL,
semconv.OSDescription(description),
), nil
}
// mapRuntimeOSToSemconvOSType translates the OS name as provided by the Go runtime
// into an OS type attribute with the corresponding value defined by the semantic
// conventions. In case the provided OS name isn't mapped, it's transformed to lowercase
// and used as the value for the returned OS type attribute.
func mapRuntimeOSToSemconvOSType(osType string) attribute.KeyValue {
// the elements in this map are the intersection between
// available GOOS values and defined semconv OS types
osTypeAttributeMap := map[string]attribute.KeyValue{
"aix": semconv.OSTypeAIX,
"darwin": semconv.OSTypeDarwin,
"dragonfly": semconv.OSTypeDragonflyBSD,
"freebsd": semconv.OSTypeFreeBSD,
"linux": semconv.OSTypeLinux,
"netbsd": semconv.OSTypeNetBSD,
"openbsd": semconv.OSTypeOpenBSD,
"solaris": semconv.OSTypeSolaris,
"windows": semconv.OSTypeWindows,
"zos": semconv.OSTypeZOS,
}
var osTypeAttribute attribute.KeyValue
if attr, ok := osTypeAttributeMap[osType]; ok {
osTypeAttribute = attr
} else {
osTypeAttribute = semconv.OSTypeKey.String(strings.ToLower(osType))
}
return osTypeAttribute
}
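A minimal sketch exercising both OS detectors through the public options; the commented outputs are illustrative for a Linux host.

package main

import (
    "context"
    "fmt"

    "go.opentelemetry.io/otel/sdk/resource"
)

func main() {
    res, err := resource.New(
        context.Background(),
        resource.WithOSType(),        // e.g. os.type=linux
        resource.WithOSDescription(), // e.g. os.description=Ubuntu 22.04 ... (uname -snrvm style)
    )
    if err != nil {
        fmt.Println("os detection failed:", err)
        return
    }
    fmt.Println(res)
}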


@ -0,0 +1,91 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"encoding/xml"
"fmt"
"io"
"os"
)
type plist struct {
XMLName xml.Name `xml:"plist"`
Dict dict `xml:"dict"`
}
type dict struct {
Key []string `xml:"key"`
String []string `xml:"string"`
}
// osRelease builds a string describing the operating system release based on the
// contents of the property list (.plist) system files. If no .plist files are found,
// or if the required properties to build the release description string are missing,
// an empty string is returned instead. The generated string resembles the output of
// the `sw_vers` command-line program, but as a single-line string. For more information
// about the `sw_vers` program, see: https://www.unix.com/man-page/osx/1/SW_VERS.
func osRelease() string {
file, err := getPlistFile()
if err != nil {
return ""
}
defer file.Close()
values, err := parsePlistFile(file)
if err != nil {
return ""
}
return buildOSRelease(values)
}
// getPlistFile returns a *os.File pointing to one of the well-known .plist files
// available on macOS. If no file can be opened, it returns an error.
func getPlistFile() (*os.File, error) {
return getFirstAvailableFile([]string{
"/System/Library/CoreServices/SystemVersion.plist",
"/System/Library/CoreServices/ServerVersion.plist",
})
}
// parsePlistFile processes the file pointed to by `file` as a .plist file and
// returns a map with the key-value pairs formed by each pair of correlated
// <key> and <string> elements contained in it.
func parsePlistFile(file io.Reader) (map[string]string, error) {
var v plist
err := xml.NewDecoder(file).Decode(&v)
if err != nil {
return nil, err
}
if len(v.Dict.Key) != len(v.Dict.String) {
return nil, fmt.Errorf("the number of <key> and <string> elements doesn't match")
}
properties := make(map[string]string, len(v.Dict.Key))
for i, key := range v.Dict.Key {
properties[key] = v.Dict.String[i]
}
return properties, nil
}
// buildOSRelease builds a string describing the OS release based on the properties
// available on the provided map. It tries to find the `ProductName`, `ProductVersion`
// and `ProductBuildVersion` properties. If some of these properties are not found,
// it returns an empty string.
func buildOSRelease(properties map[string]string) string {
productName := properties["ProductName"]
productVersion := properties["ProductVersion"]
productBuildVersion := properties["ProductBuildVersion"]
if productName == "" || productVersion == "" || productBuildVersion == "" {
return ""
}
return fmt.Sprintf("%s %s (%s)", productName, productVersion, productBuildVersion)
}
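A standalone sketch of the <key>/<string> pairing logic above, using a trimmed, hypothetical SystemVersion.plist document.

package main

import (
    "encoding/xml"
    "fmt"
    "strings"
)

type plistDoc struct {
    XMLName xml.Name `xml:"plist"`
    Dict    struct {
        Key    []string `xml:"key"`
        String []string `xml:"string"`
    } `xml:"dict"`
}

const sample = `<plist version="1.0"><dict>
<key>ProductName</key><string>macOS</string>
<key>ProductVersion</key><string>14.5</string>
<key>ProductBuildVersion</key><string>23F79</string>
</dict></plist>`

func main() {
    var v plistDoc
    if err := xml.NewDecoder(strings.NewReader(sample)).Decode(&v); err != nil {
        panic(err)
    }
    // Correlate the i-th <key> with the i-th <string>, as parsePlistFile does.
    props := make(map[string]string, len(v.Dict.Key))
    for i, key := range v.Dict.Key {
        props[key] = v.Dict.String[i]
    }
    fmt.Printf("%s %s (%s)\n", props["ProductName"], props["ProductVersion"], props["ProductBuildVersion"])
    // Output: macOS 14.5 (23F79)
}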


@ -0,0 +1,143 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:build aix || dragonfly || freebsd || linux || netbsd || openbsd || solaris || zos
// +build aix dragonfly freebsd linux netbsd openbsd solaris zos
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"bufio"
"fmt"
"io"
"os"
"strings"
)
// osRelease builds a string describing the operating system release based on the
// properties of the os-release file. If no os-release file is found, or if the
// required properties to build the release description string are missing, an empty
// string is returned instead. For more information about os-release files, see:
// https://www.freedesktop.org/software/systemd/man/os-release.html
func osRelease() string {
file, err := getOSReleaseFile()
if err != nil {
return ""
}
defer file.Close()
values := parseOSReleaseFile(file)
return buildOSRelease(values)
}
// getOSReleaseFile returns a *os.File pointing to one of the well-known os-release
// files, according to their order of preference. If no file can be opened, it
// returns an error.
func getOSReleaseFile() (*os.File, error) {
return getFirstAvailableFile([]string{"/etc/os-release", "/usr/lib/os-release"})
}
// parseOSReleaseFile processes the file pointed to by `file` as an os-release
// file and returns a map with the key-value pairs contained in it. Empty lines,
// lines starting with a '#' character, and lines missing the key=value
// separator are ignored. Values are unquoted and unescaped.
func parseOSReleaseFile(file io.Reader) map[string]string {
values := make(map[string]string)
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := scanner.Text()
if skip(line) {
continue
}
key, value, ok := parse(line)
if ok {
values[key] = value
}
}
return values
}
// skip returns true if the line is blank or starts with a '#' character, and
// therefore should be skipped from processing.
func skip(line string) bool {
line = strings.TrimSpace(line)
return len(line) == 0 || strings.HasPrefix(line, "#")
}
// parse attempts to split the provided line on the first '=' character, and then
// sanitize each side of the split before returning them as a key-value pair.
func parse(line string) (string, string, bool) {
k, v, found := strings.Cut(line, "=")
if !found || len(k) == 0 {
return "", "", false
}
key := strings.TrimSpace(k)
value := unescape(unquote(strings.TrimSpace(v)))
return key, value, true
}
// unquote checks whether the string `s` is quoted with double or single quotes
// and, if so, returns a version of the string without them. Otherwise it returns
// the provided string unchanged.
func unquote(s string) string {
if len(s) < 2 {
return s
}
if (s[0] == '"' || s[0] == '\'') && s[0] == s[len(s)-1] {
return s[1 : len(s)-1]
}
return s
}
// unescape removes the `\` prefix from some characters that are expected
// to have it added in front of them for escaping purposes.
func unescape(s string) string {
return strings.NewReplacer(
`\$`, `$`,
`\"`, `"`,
`\'`, `'`,
`\\`, `\`,
"\\`", "`",
).Replace(s)
}
// buildOSRelease builds a string describing the OS release based on the properties
// available on the provided map. It favors a combination of the `NAME` and `VERSION`
// properties as first option (falling back to `VERSION_ID` if `VERSION` isn't
// found), and uses `PRETTY_NAME` alone if either of those is not present. If
// none of these properties are found, it returns an empty string.
//
// The rationale behind not using `PRETTY_NAME` as first choice was that, for some
// Linux distributions, it doesn't include the same detail that can be found on the
// individual `NAME` and `VERSION` properties, and combining `PRETTY_NAME` with
// other properties can produce "pretty" redundant strings in some cases.
func buildOSRelease(values map[string]string) string {
var osRelease string
name := values["NAME"]
version := values["VERSION"]
if version == "" {
version = values["VERSION_ID"]
}
if name != "" && version != "" {
osRelease = fmt.Sprintf("%s %s", name, version)
} else {
osRelease = values["PRETTY_NAME"]
}
return osRelease
}
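A standalone sketch of the parsing rules above, with simplified unquoting and a hypothetical os-release snippet.

package main

import (
    "bufio"
    "fmt"
    "strings"
)

const sample = `# a comment
NAME="Ubuntu"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
PRETTY_NAME="Ubuntu 22.04.4 LTS"
`

func main() {
    values := map[string]string{}
    scanner := bufio.NewScanner(strings.NewReader(sample))
    for scanner.Scan() {
        line := strings.TrimSpace(scanner.Text())
        if line == "" || strings.HasPrefix(line, "#") {
            continue // blank lines and comments are skipped
        }
        k, v, ok := strings.Cut(line, "=")
        if !ok || k == "" {
            continue // lines missing the key=value separator are skipped
        }
        // Simplified unquote; the real code also unescapes \$ \" \' \\ and backticks.
        values[strings.TrimSpace(k)] = strings.Trim(strings.TrimSpace(v), `"'`)
    }
    // NAME + VERSION is preferred over PRETTY_NAME.
    fmt.Println(values["NAME"], values["VERSION"]) // Ubuntu 22.04.4 LTS (Jammy Jellyfish)
}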


@ -0,0 +1,79 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:build aix || darwin || dragonfly || freebsd || linux || netbsd || openbsd || solaris || zos
// +build aix darwin dragonfly freebsd linux netbsd openbsd solaris zos
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"fmt"
"os"
"golang.org/x/sys/unix"
)
type unameProvider func(buf *unix.Utsname) (err error)
var defaultUnameProvider unameProvider = unix.Uname
var currentUnameProvider = defaultUnameProvider
func setDefaultUnameProvider() {
setUnameProvider(defaultUnameProvider)
}
func setUnameProvider(unameProvider unameProvider) {
currentUnameProvider = unameProvider
}
// platformOSDescription returns a human readable OS version information string.
// The final string combines OS release information (where available) and the
// result of the `uname` system call.
func platformOSDescription() (string, error) {
uname, err := uname()
if err != nil {
return "", err
}
osRelease := osRelease()
if osRelease != "" {
return fmt.Sprintf("%s (%s)", osRelease, uname), nil
}
return uname, nil
}
// uname issues a uname(2) system call (or equivalent on systems that don't
// have one) and formats the output in a single string, similar to the output
// of the `uname` commandline program. The final string resembles the one
// obtained with a call to `uname -snrvm`.
func uname() (string, error) {
var utsName unix.Utsname
err := currentUnameProvider(&utsName)
if err != nil {
return "", err
}
return fmt.Sprintf("%s %s %s %s %s",
unix.ByteSliceToString(utsName.Sysname[:]),
unix.ByteSliceToString(utsName.Nodename[:]),
unix.ByteSliceToString(utsName.Release[:]),
unix.ByteSliceToString(utsName.Version[:]),
unix.ByteSliceToString(utsName.Machine[:]),
), nil
}
// getFirstAvailableFile returns an *os.File of the first available
// file from a list of candidate file paths.
func getFirstAvailableFile(candidates []string) (*os.File, error) {
for _, c := range candidates {
file, err := os.Open(c)
if err == nil {
return file, nil
}
}
return nil, fmt.Errorf("no candidate file available: %v", candidates)
}
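A Unix-only sketch of the same uname formatting, calling golang.org/x/sys/unix directly.

//go:build linux || darwin

package main

import (
    "fmt"

    "golang.org/x/sys/unix"
)

func main() {
    var uts unix.Utsname
    if err := unix.Uname(&uts); err != nil {
        panic(err)
    }
    // Same field order as the uname() helper above (`uname -snrvm`).
    fmt.Println(
        unix.ByteSliceToString(uts.Sysname[:]),
        unix.ByteSliceToString(uts.Nodename[:]),
        unix.ByteSliceToString(uts.Release[:]),
        unix.ByteSliceToString(uts.Version[:]),
        unix.ByteSliceToString(uts.Machine[:]),
    )
}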


@ -0,0 +1,15 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:build !aix && !darwin && !dragonfly && !freebsd && !linux && !netbsd && !openbsd && !solaris && !windows && !zos
// +build !aix,!darwin,!dragonfly,!freebsd,!linux,!netbsd,!openbsd,!solaris,!windows,!zos
package resource // import "go.opentelemetry.io/otel/sdk/resource"
// platformOSDescription is a placeholder implementation for OSes
// for which this project currently doesn't support os.description
// attribute detection. See the build tags declaration at the top of this
// file for the list of unsupported OSes.
func platformOSDescription() (string, error) {
return "<unknown>", nil
}


@ -0,0 +1,90 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"fmt"
"strconv"
"golang.org/x/sys/windows/registry"
)
// platformOSDescription returns a human readable OS version information string.
// It does so by querying registry values under the
// `SOFTWARE\Microsoft\Windows NT\CurrentVersion` key. The final string
// resembles the one displayed by the Version Reporter Applet (winver.exe).
func platformOSDescription() (string, error) {
k, err := registry.OpenKey(
registry.LOCAL_MACHINE, `SOFTWARE\Microsoft\Windows NT\CurrentVersion`, registry.QUERY_VALUE)
if err != nil {
return "", err
}
defer k.Close()
var (
productName = readProductName(k)
displayVersion = readDisplayVersion(k)
releaseID = readReleaseID(k)
currentMajorVersionNumber = readCurrentMajorVersionNumber(k)
currentMinorVersionNumber = readCurrentMinorVersionNumber(k)
currentBuildNumber = readCurrentBuildNumber(k)
ubr = readUBR(k)
)
if displayVersion != "" {
displayVersion += " "
}
return fmt.Sprintf("%s %s(%s) [Version %s.%s.%s.%s]",
productName,
displayVersion,
releaseID,
currentMajorVersionNumber,
currentMinorVersionNumber,
currentBuildNumber,
ubr,
), nil
}
func getStringValue(name string, k registry.Key) string {
value, _, _ := k.GetStringValue(name)
return value
}
func getIntegerValue(name string, k registry.Key) uint64 {
value, _, _ := k.GetIntegerValue(name)
return value
}
func readProductName(k registry.Key) string {
return getStringValue("ProductName", k)
}
func readDisplayVersion(k registry.Key) string {
return getStringValue("DisplayVersion", k)
}
func readReleaseID(k registry.Key) string {
return getStringValue("ReleaseID", k)
}
func readCurrentMajorVersionNumber(k registry.Key) string {
return strconv.FormatUint(getIntegerValue("CurrentMajorVersionNumber", k), 10)
}
func readCurrentMinorVersionNumber(k registry.Key) string {
return strconv.FormatUint(getIntegerValue("CurrentMinorVersionNumber", k), 10)
}
func readCurrentBuildNumber(k registry.Key) string {
return getStringValue("CurrentBuildNumber", k)
}
func readUBR(k registry.Key) string {
return strconv.FormatUint(getIntegerValue("UBR", k), 10)
}

173
vendor/go.opentelemetry.io/otel/sdk/resource/process.go generated vendored Normal file

@ -0,0 +1,173 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"fmt"
"os"
"os/user"
"path/filepath"
"runtime"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)
type (
pidProvider func() int
executablePathProvider func() (string, error)
commandArgsProvider func() []string
ownerProvider func() (*user.User, error)
runtimeNameProvider func() string
runtimeVersionProvider func() string
runtimeOSProvider func() string
runtimeArchProvider func() string
)
var (
defaultPidProvider pidProvider = os.Getpid
defaultExecutablePathProvider executablePathProvider = os.Executable
defaultCommandArgsProvider commandArgsProvider = func() []string { return os.Args }
defaultOwnerProvider ownerProvider = user.Current
defaultRuntimeNameProvider runtimeNameProvider = func() string {
if runtime.Compiler == "gc" {
return "go"
}
return runtime.Compiler
}
defaultRuntimeVersionProvider runtimeVersionProvider = runtime.Version
defaultRuntimeOSProvider runtimeOSProvider = func() string { return runtime.GOOS }
defaultRuntimeArchProvider runtimeArchProvider = func() string { return runtime.GOARCH }
)
var (
pid = defaultPidProvider
executablePath = defaultExecutablePathProvider
commandArgs = defaultCommandArgsProvider
owner = defaultOwnerProvider
runtimeName = defaultRuntimeNameProvider
runtimeVersion = defaultRuntimeVersionProvider
runtimeOS = defaultRuntimeOSProvider
runtimeArch = defaultRuntimeArchProvider
)
func setDefaultOSProviders() {
setOSProviders(
defaultPidProvider,
defaultExecutablePathProvider,
defaultCommandArgsProvider,
)
}
func setOSProviders(
pidProvider pidProvider,
executablePathProvider executablePathProvider,
commandArgsProvider commandArgsProvider,
) {
pid = pidProvider
executablePath = executablePathProvider
commandArgs = commandArgsProvider
}
func setDefaultRuntimeProviders() {
setRuntimeProviders(
defaultRuntimeNameProvider,
defaultRuntimeVersionProvider,
defaultRuntimeOSProvider,
defaultRuntimeArchProvider,
)
}
func setRuntimeProviders(
runtimeNameProvider runtimeNameProvider,
runtimeVersionProvider runtimeVersionProvider,
runtimeOSProvider runtimeOSProvider,
runtimeArchProvider runtimeArchProvider,
) {
runtimeName = runtimeNameProvider
runtimeVersion = runtimeVersionProvider
runtimeOS = runtimeOSProvider
runtimeArch = runtimeArchProvider
}
func setDefaultUserProviders() {
setUserProviders(defaultOwnerProvider)
}
func setUserProviders(ownerProvider ownerProvider) {
owner = ownerProvider
}
type (
processPIDDetector struct{}
processExecutableNameDetector struct{}
processExecutablePathDetector struct{}
processCommandArgsDetector struct{}
processOwnerDetector struct{}
processRuntimeNameDetector struct{}
processRuntimeVersionDetector struct{}
processRuntimeDescriptionDetector struct{}
)
// Detect returns a *Resource that describes the process identifier (PID) of the
// executing process.
func (processPIDDetector) Detect(ctx context.Context) (*Resource, error) {
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessPID(pid())), nil
}
// Detect returns a *Resource that describes the name of the process executable.
func (processExecutableNameDetector) Detect(ctx context.Context) (*Resource, error) {
executableName := filepath.Base(commandArgs()[0])
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessExecutableName(executableName)), nil
}
// Detect returns a *Resource that describes the full path of the process executable.
func (processExecutablePathDetector) Detect(ctx context.Context) (*Resource, error) {
executablePath, err := executablePath()
if err != nil {
return nil, err
}
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessExecutablePath(executablePath)), nil
}
// Detect returns a *Resource that describes all the command arguments as received
// by the process.
func (processCommandArgsDetector) Detect(ctx context.Context) (*Resource, error) {
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessCommandArgs(commandArgs()...)), nil
}
// Detect returns a *Resource that describes the username of the user that owns the
// process.
func (processOwnerDetector) Detect(ctx context.Context) (*Resource, error) {
owner, err := owner()
if err != nil {
return nil, err
}
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessOwner(owner.Username)), nil
}
// Detect returns a *Resource that describes the name of the compiler used to compile
// this process image.
func (processRuntimeNameDetector) Detect(ctx context.Context) (*Resource, error) {
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessRuntimeName(runtimeName())), nil
}
// Detect returns a *Resource that describes the version of the runtime of this process.
func (processRuntimeVersionDetector) Detect(ctx context.Context) (*Resource, error) {
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessRuntimeVersion(runtimeVersion())), nil
}
// Detect returns a *Resource that describes the runtime of this process.
func (processRuntimeDescriptionDetector) Detect(ctx context.Context) (*Resource, error) {
runtimeDescription := fmt.Sprintf(
"go version %s %s/%s", runtimeVersion(), runtimeOS(), runtimeArch())
return NewWithAttributes(
semconv.SchemaURL,
semconv.ProcessRuntimeDescription(runtimeDescription),
), nil
}
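A minimal sketch of the process detectors through the public WithProcess option; note the command-line-arguments warning above.

package main

import (
    "context"
    "fmt"

    "go.opentelemetry.io/otel/sdk/resource"
)

func main() {
    // WithProcess includes process.command_args; see the warning on WithProcess.
    res, err := resource.New(context.Background(), resource.WithProcess())
    if err != nil {
        fmt.Println("partial detection:", err)
    }
    for _, kv := range res.Attributes() {
        fmt.Printf("%s=%s\n", kv.Key, kv.Value.Emit())
    }
}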


@ -0,0 +1,294 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"errors"
"fmt"
"sync"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk/internal/x"
)
// Resource describes an entity about which identifying information
// and metadata is exposed. Resource is an immutable object,
// equivalent to a map from key to unique value.
//
// Resources should be passed and stored as pointers
// (`*resource.Resource`). The `nil` value is equivalent to an empty
// Resource.
type Resource struct {
attrs attribute.Set
schemaURL string
}
var (
defaultResource *Resource
defaultResourceOnce sync.Once
)
// ErrSchemaURLConflict is an error returned when two Resources are merged
// together that contain different, non-empty, schema URLs.
var ErrSchemaURLConflict = errors.New("conflicting Schema URL")
// New returns a [Resource] built using opts.
//
// This may return a partial Resource along with an error containing
// [ErrPartialResource] if options that provide a [Detector] are used and that
// error is returned from one or more of the Detectors. It may also return a
// merge-conflict Resource along with an error containing
// [ErrSchemaURLConflict] if merging Resources from the opts results in a
// schema URL conflict (see [Resource.Merge] for more information). It is up to
// the caller to determine if this returned Resource should be used or not
// based on these errors.
func New(ctx context.Context, opts ...Option) (*Resource, error) {
cfg := config{}
for _, opt := range opts {
cfg = opt.apply(cfg)
}
r := &Resource{schemaURL: cfg.schemaURL}
return r, detect(ctx, r, cfg.detectors)
}
// NewWithAttributes creates a resource from attrs and associates the resource with a
// schema URL. If attrs contains duplicate keys, the last value will be used. If attrs
// contains any invalid items those items will be dropped. The attrs are assumed to be
// in a schema identified by schemaURL.
func NewWithAttributes(schemaURL string, attrs ...attribute.KeyValue) *Resource {
resource := NewSchemaless(attrs...)
resource.schemaURL = schemaURL
return resource
}
// NewSchemaless creates a resource from attrs. If attrs contains duplicate keys,
// the last value will be used. If attrs contains any invalid items those items will
// be dropped. The resource will not be associated with a schema URL. If the schema
// of the attrs is known use NewWithAttributes instead.
func NewSchemaless(attrs ...attribute.KeyValue) *Resource {
if len(attrs) == 0 {
return &Resource{}
}
// Ensure attributes comply with the specification:
// https://github.com/open-telemetry/opentelemetry-specification/blob/v1.20.0/specification/common/README.md#attribute
s, _ := attribute.NewSetWithFiltered(attrs, func(kv attribute.KeyValue) bool {
return kv.Valid()
})
// If attrs only contains invalid entries do not allocate a new resource.
if s.Len() == 0 {
return &Resource{}
}
return &Resource{attrs: s} //nolint
}
// String implements the Stringer interface and provides a
// human-readable form of the resource.
//
// Avoid using this representation as the key in a map of resources,
// use Equivalent() as the key instead.
func (r *Resource) String() string {
if r == nil {
return ""
}
return r.attrs.Encoded(attribute.DefaultEncoder())
}
// MarshalLog is the marshaling function used by the logging system to represent this Resource.
func (r *Resource) MarshalLog() interface{} {
return struct {
Attributes attribute.Set
SchemaURL string
}{
Attributes: r.attrs,
SchemaURL: r.schemaURL,
}
}
// Attributes returns a copy of attributes from the resource in a sorted order.
// To avoid allocating a new slice, use an iterator.
func (r *Resource) Attributes() []attribute.KeyValue {
if r == nil {
r = Empty()
}
return r.attrs.ToSlice()
}
// SchemaURL returns the schema URL associated with Resource r.
func (r *Resource) SchemaURL() string {
if r == nil {
return ""
}
return r.schemaURL
}
// Iter returns an iterator of the Resource attributes.
// This is ideal to use if you do not want a copy of the attributes.
func (r *Resource) Iter() attribute.Iterator {
if r == nil {
r = Empty()
}
return r.attrs.Iter()
}
// Equal returns true when a Resource is equivalent to this Resource.
func (r *Resource) Equal(eq *Resource) bool {
if r == nil {
r = Empty()
}
if eq == nil {
eq = Empty()
}
return r.Equivalent() == eq.Equivalent()
}
// Merge creates a new [Resource] by merging a and b.
//
// If there are common keys between a and b, then the value from b will
// overwrite the value from a, even if b's value is empty.
//
// The SchemaURL of the resources will be merged according to the
// [OpenTelemetry specification rules]:
//
// - If a's schema URL is empty then the returned Resource's schema URL will
// be set to the schema URL of b,
// - Else if b's schema URL is empty then the returned Resource's schema URL
// will be set to the schema URL of a,
// - Else if the schema URLs of a and b are the same then that will be the
// schema URL of the returned Resource,
// - Else this is a merging error. If the resources have different,
// non-empty, schema URLs an error containing [ErrSchemaURLConflict] will
// be returned with the merged Resource. The merged Resource will have an
// empty schema URL. It may be the case that some unintended attributes
// have been overwritten or old semantic conventions persisted in the
// returned Resource. It is up to the caller to determine if this returned
// Resource should be used or not.
//
// [OpenTelemetry specification rules]: https://github.com/open-telemetry/opentelemetry-specification/blob/v1.20.0/specification/resource/sdk.md#merge
func Merge(a, b *Resource) (*Resource, error) {
if a == nil && b == nil {
return Empty(), nil
}
if a == nil {
return b, nil
}
if b == nil {
return a, nil
}
// Note: 'b' attributes will overwrite 'a' with last-value-wins in attribute.Key()
// Meaning this is equivalent to: append(a.Attributes(), b.Attributes()...)
mi := attribute.NewMergeIterator(b.Set(), a.Set())
combine := make([]attribute.KeyValue, 0, a.Len()+b.Len())
for mi.Next() {
combine = append(combine, mi.Attribute())
}
switch {
case a.schemaURL == "":
return NewWithAttributes(b.schemaURL, combine...), nil
case b.schemaURL == "":
return NewWithAttributes(a.schemaURL, combine...), nil
case a.schemaURL == b.schemaURL:
return NewWithAttributes(a.schemaURL, combine...), nil
}
// Return the merged resource with an appropriate error. It is up to
// the user to decide if the returned resource can be used or not.
return NewSchemaless(combine...), fmt.Errorf(
"%w: %s and %s",
ErrSchemaURLConflict,
a.schemaURL,
b.schemaURL,
)
}
// Empty returns an instance of Resource with no attributes. It is
// equivalent to a `nil` Resource.
func Empty() *Resource {
return &Resource{}
}
// Default returns an instance of Resource with a default
// "service.name" and OpenTelemetrySDK attributes.
func Default() *Resource {
defaultResourceOnce.Do(func() {
var err error
defaultDetectors := []Detector{
defaultServiceNameDetector{},
fromEnv{},
telemetrySDK{},
}
if x.Resource.Enabled() {
defaultDetectors = append([]Detector{defaultServiceInstanceIDDetector{}}, defaultDetectors...)
}
defaultResource, err = Detect(
context.Background(),
defaultDetectors...,
)
if err != nil {
otel.Handle(err)
}
// If Detect did not return a valid resource, fall back to emptyResource.
if defaultResource == nil {
defaultResource = &Resource{}
}
})
return defaultResource
}
// Environment returns an instance of Resource with attributes
// extracted from the OTEL_RESOURCE_ATTRIBUTES environment variable.
func Environment() *Resource {
detector := &fromEnv{}
resource, err := detector.Detect(context.Background())
if err != nil {
otel.Handle(err)
}
return resource
}
// Equivalent returns an object that can be compared for equality
// between two resources. This value is suitable for use as a key in
// a map.
func (r *Resource) Equivalent() attribute.Distinct {
return r.Set().Equivalent()
}
// Set returns the equivalent *attribute.Set of this resource's attributes.
func (r *Resource) Set() *attribute.Set {
if r == nil {
r = Empty()
}
return &r.attrs
}
// MarshalJSON encodes the resource attributes as a JSON list of { "Key":
// "...", "Value": ... } pairs in order sorted by key.
func (r *Resource) MarshalJSON() ([]byte, error) {
if r == nil {
r = Empty()
}
return r.attrs.MarshalJSON()
}
// Len returns the number of unique key-values in this Resource.
func (r *Resource) Len() int {
if r == nil {
return 0
}
return r.attrs.Len()
}
// Encoded returns an encoded representation of the resource.
func (r *Resource) Encoded(enc attribute.Encoder) string {
if r == nil {
return ""
}
return r.attrs.Encoded(enc)
}
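A runnable sketch of the Merge semantics and of Equivalent as a comparable map key; the attribute values are hypothetical.

package main

import (
    "fmt"

    "go.opentelemetry.io/otel/attribute"
    "go.opentelemetry.io/otel/sdk/resource"
)

func main() {
    a := resource.NewSchemaless(
        attribute.String("service.name", "old"),
        attribute.String("host.name", "web-1"),
    )
    b := resource.NewSchemaless(attribute.String("service.name", "new"))

    // On common keys the value from b overwrites the value from a.
    merged, err := resource.Merge(a, b)
    if err != nil {
        panic(err)
    }
    fmt.Println(merged) // host.name=web-1,service.name=new

    // Equivalent returns a comparable value, suitable as a map key.
    same := resource.NewSchemaless(
        attribute.String("host.name", "web-1"),
        attribute.String("service.name", "new"),
    )
    fmt.Println(merged.Equivalent() == same.Equivalent()) // true
}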

3
vendor/go.opentelemetry.io/otel/sdk/trace/README.md generated vendored Normal file

@ -0,0 +1,3 @@
# SDK Trace
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/sdk/trace)](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/trace)


@ -0,0 +1,409 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
"sync"
"sync/atomic"
"time"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/sdk/internal/env"
"go.opentelemetry.io/otel/trace"
)
// Defaults for BatchSpanProcessorOptions.
const (
DefaultMaxQueueSize = 2048
DefaultScheduleDelay = 5000
DefaultExportTimeout = 30000
DefaultMaxExportBatchSize = 512
)
// BatchSpanProcessorOption configures a BatchSpanProcessor.
type BatchSpanProcessorOption func(o *BatchSpanProcessorOptions)
// BatchSpanProcessorOptions is configuration settings for a
// BatchSpanProcessor.
type BatchSpanProcessorOptions struct {
// MaxQueueSize is the maximum queue size to buffer spans for delayed processing.
// If the queue gets full, spans are dropped. Use BlockOnQueueFull to change
// this behavior. The default value of MaxQueueSize is 2048.
MaxQueueSize int
// BatchTimeout is the maximum duration for constructing a batch. The processor
// forcefully sends available spans when the timeout is reached.
// The default value of BatchTimeout is 5000 msec.
BatchTimeout time.Duration
// ExportTimeout specifies the maximum duration for exporting spans. If the
// timeout is reached, the export is cancelled.
// The default value of ExportTimeout is 30000 msec.
ExportTimeout time.Duration
// MaxExportBatchSize is the maximum number of spans to process in a single batch.
// If there is more than one batch worth of spans then it processes multiple
// batches of spans one batch after the other without any delay.
// The default value of MaxExportBatchSize is 512.
MaxExportBatchSize int
// BlockOnQueueFull blocks the OnEnd() and OnStart() methods if the queue is full.
// This option should be used carefully, as blocking can severely affect the
// performance of an application.
BlockOnQueueFull bool
}
// batchSpanProcessor is a SpanProcessor that batches asynchronously-received
// spans and sends them to a trace.Exporter when complete.
type batchSpanProcessor struct {
e SpanExporter
o BatchSpanProcessorOptions
queue chan ReadOnlySpan
dropped uint32
batch []ReadOnlySpan
batchMutex sync.Mutex
timer *time.Timer
stopWait sync.WaitGroup
stopOnce sync.Once
stopCh chan struct{}
stopped atomic.Bool
}
var _ SpanProcessor = (*batchSpanProcessor)(nil)
// NewBatchSpanProcessor creates a new SpanProcessor that will send completed
// span batches to the exporter with the supplied options.
//
// If the exporter is nil, the span processor will perform no action.
func NewBatchSpanProcessor(exporter SpanExporter, options ...BatchSpanProcessorOption) SpanProcessor {
maxQueueSize := env.BatchSpanProcessorMaxQueueSize(DefaultMaxQueueSize)
maxExportBatchSize := env.BatchSpanProcessorMaxExportBatchSize(DefaultMaxExportBatchSize)
if maxExportBatchSize > maxQueueSize {
if DefaultMaxExportBatchSize > maxQueueSize {
maxExportBatchSize = maxQueueSize
} else {
maxExportBatchSize = DefaultMaxExportBatchSize
}
}
o := BatchSpanProcessorOptions{
BatchTimeout: time.Duration(env.BatchSpanProcessorScheduleDelay(DefaultScheduleDelay)) * time.Millisecond,
ExportTimeout: time.Duration(env.BatchSpanProcessorExportTimeout(DefaultExportTimeout)) * time.Millisecond,
MaxQueueSize: maxQueueSize,
MaxExportBatchSize: maxExportBatchSize,
}
for _, opt := range options {
opt(&o)
}
bsp := &batchSpanProcessor{
e: exporter,
o: o,
batch: make([]ReadOnlySpan, 0, o.MaxExportBatchSize),
timer: time.NewTimer(o.BatchTimeout),
queue: make(chan ReadOnlySpan, o.MaxQueueSize),
stopCh: make(chan struct{}),
}
bsp.stopWait.Add(1)
go func() {
defer bsp.stopWait.Done()
bsp.processQueue()
bsp.drainQueue()
}()
return bsp
}
// OnStart method does nothing.
func (bsp *batchSpanProcessor) OnStart(parent context.Context, s ReadWriteSpan) {}
// OnEnd method enqueues a ReadOnlySpan for later processing.
func (bsp *batchSpanProcessor) OnEnd(s ReadOnlySpan) {
// Do not enqueue spans after Shutdown.
if bsp.stopped.Load() {
return
}
// Do not enqueue spans if we are just going to drop them.
if bsp.e == nil {
return
}
bsp.enqueue(s)
}
// Shutdown flushes the queue and waits until all spans are processed.
// It only executes once; subsequent calls do nothing.
func (bsp *batchSpanProcessor) Shutdown(ctx context.Context) error {
var err error
bsp.stopOnce.Do(func() {
bsp.stopped.Store(true)
wait := make(chan struct{})
go func() {
close(bsp.stopCh)
bsp.stopWait.Wait()
if bsp.e != nil {
if err := bsp.e.Shutdown(ctx); err != nil {
otel.Handle(err)
}
}
close(wait)
}()
// Wait until the wait group is done or the context is cancelled
select {
case <-wait:
case <-ctx.Done():
err = ctx.Err()
}
})
return err
}
type forceFlushSpan struct {
ReadOnlySpan
flushed chan struct{}
}
func (f forceFlushSpan) SpanContext() trace.SpanContext {
return trace.NewSpanContext(trace.SpanContextConfig{TraceFlags: trace.FlagsSampled})
}
// ForceFlush exports all ended spans that have not yet been exported.
func (bsp *batchSpanProcessor) ForceFlush(ctx context.Context) error {
// Interrupt if context is already canceled.
if err := ctx.Err(); err != nil {
return err
}
// Do nothing after Shutdown.
if bsp.stopped.Load() {
return nil
}
var err error
if bsp.e != nil {
flushCh := make(chan struct{})
if bsp.enqueueBlockOnQueueFull(ctx, forceFlushSpan{flushed: flushCh}) {
select {
case <-bsp.stopCh:
// The batchSpanProcessor is Shutdown.
return nil
case <-flushCh:
// Processed any items in queue prior to ForceFlush being called
case <-ctx.Done():
return ctx.Err()
}
}
wait := make(chan error)
go func() {
wait <- bsp.exportSpans(ctx)
close(wait)
}()
// Wait until the export is finished or the context is cancelled/timed out
select {
case err = <-wait:
case <-ctx.Done():
err = ctx.Err()
}
}
return err
}
// WithMaxQueueSize returns a BatchSpanProcessorOption that configures the
// maximum queue size allowed for a BatchSpanProcessor.
func WithMaxQueueSize(size int) BatchSpanProcessorOption {
return func(o *BatchSpanProcessorOptions) {
o.MaxQueueSize = size
}
}
// WithMaxExportBatchSize returns a BatchSpanProcessorOption that configures
// the maximum export batch size allowed for a BatchSpanProcessor.
func WithMaxExportBatchSize(size int) BatchSpanProcessorOption {
return func(o *BatchSpanProcessorOptions) {
o.MaxExportBatchSize = size
}
}
// WithBatchTimeout returns a BatchSpanProcessorOption that configures the
// maximum delay allowed for a BatchSpanProcessor before it will export any
// held span (whether the queue is full or not).
func WithBatchTimeout(delay time.Duration) BatchSpanProcessorOption {
return func(o *BatchSpanProcessorOptions) {
o.BatchTimeout = delay
}
}
// WithExportTimeout returns a BatchSpanProcessorOption that configures the
// amount of time a BatchSpanProcessor waits for an exporter to export before
// abandoning the export.
func WithExportTimeout(timeout time.Duration) BatchSpanProcessorOption {
return func(o *BatchSpanProcessorOptions) {
o.ExportTimeout = timeout
}
}
// WithBlocking returns a BatchSpanProcessorOption that configures a
// BatchSpanProcessor to wait for enqueue operations to succeed instead of
// dropping data when the queue is full.
func WithBlocking() BatchSpanProcessorOption {
return func(o *BatchSpanProcessorOptions) {
o.BlockOnQueueFull = true
}
}
// exportSpans is a subroutine of processing and draining the queue.
func (bsp *batchSpanProcessor) exportSpans(ctx context.Context) error {
bsp.timer.Reset(bsp.o.BatchTimeout)
bsp.batchMutex.Lock()
defer bsp.batchMutex.Unlock()
if bsp.o.ExportTimeout > 0 {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(ctx, bsp.o.ExportTimeout)
defer cancel()
}
if l := len(bsp.batch); l > 0 {
global.Debug("exporting spans", "count", len(bsp.batch), "total_dropped", atomic.LoadUint32(&bsp.dropped))
err := bsp.e.ExportSpans(ctx, bsp.batch)
// A new batch is always created after exporting, even if the batch failed to be exported.
//
// It is up to the exporter to implement any type of retry logic if a batch is failing
// to be exported, since it is specific to the protocol and backend being sent to.
bsp.batch = bsp.batch[:0]
if err != nil {
return err
}
}
return nil
}
// processQueue removes spans from the `queue` channel until the processor
// is shut down. It calls the exporter in batches of up to MaxExportBatchSize,
// waiting up to BatchTimeout to form a batch.
func (bsp *batchSpanProcessor) processQueue() {
defer bsp.timer.Stop()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
for {
select {
case <-bsp.stopCh:
return
case <-bsp.timer.C:
if err := bsp.exportSpans(ctx); err != nil {
otel.Handle(err)
}
case sd := <-bsp.queue:
if ffs, ok := sd.(forceFlushSpan); ok {
close(ffs.flushed)
continue
}
bsp.batchMutex.Lock()
bsp.batch = append(bsp.batch, sd)
shouldExport := len(bsp.batch) >= bsp.o.MaxExportBatchSize
bsp.batchMutex.Unlock()
if shouldExport {
if !bsp.timer.Stop() {
<-bsp.timer.C
}
if err := bsp.exportSpans(ctx); err != nil {
otel.Handle(err)
}
}
}
}
}
// drainQueue waits for any callers that added to bsp.stopWait
// to finish enqueuing, then exports the final batch.
func (bsp *batchSpanProcessor) drainQueue() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
for {
select {
case sd := <-bsp.queue:
if _, ok := sd.(forceFlushSpan); ok {
// Ignore flush requests as they are not valid spans.
continue
}
bsp.batchMutex.Lock()
bsp.batch = append(bsp.batch, sd)
shouldExport := len(bsp.batch) == bsp.o.MaxExportBatchSize
bsp.batchMutex.Unlock()
if shouldExport {
if err := bsp.exportSpans(ctx); err != nil {
otel.Handle(err)
}
}
default:
// There are no more enqueued spans. Make final export.
if err := bsp.exportSpans(ctx); err != nil {
otel.Handle(err)
}
return
}
}
}
func (bsp *batchSpanProcessor) enqueue(sd ReadOnlySpan) {
ctx := context.TODO()
if bsp.o.BlockOnQueueFull {
bsp.enqueueBlockOnQueueFull(ctx, sd)
} else {
bsp.enqueueDrop(ctx, sd)
}
}
func (bsp *batchSpanProcessor) enqueueBlockOnQueueFull(ctx context.Context, sd ReadOnlySpan) bool {
if !sd.SpanContext().IsSampled() {
return false
}
select {
case bsp.queue <- sd:
return true
case <-ctx.Done():
return false
}
}
func (bsp *batchSpanProcessor) enqueueDrop(_ context.Context, sd ReadOnlySpan) bool {
if !sd.SpanContext().IsSampled() {
return false
}
select {
case bsp.queue <- sd:
return true
default:
atomic.AddUint32(&bsp.dropped, 1)
}
return false
}
// MarshalLog is the marshaling function used by the logging system to represent this Span Processor.
func (bsp *batchSpanProcessor) MarshalLog() interface{} {
return struct {
Type string
SpanExporter SpanExporter
Config BatchSpanProcessorOptions
}{
Type: "BatchSpanProcessor",
SpanExporter: bsp.e,
Config: bsp.o,
}
}
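A wiring sketch for the batch processor above. The stdout exporter from the separate go.opentelemetry.io/otel/exporters/stdout/stdouttrace module stands in for any SpanExporter, and the option values are hypothetical.

package main

import (
    "context"
    "time"

    "go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
    sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
    exp, err := stdouttrace.New()
    if err != nil {
        panic(err)
    }
    bsp := sdktrace.NewBatchSpanProcessor(
        exp,
        sdktrace.WithMaxQueueSize(4096),
        sdktrace.WithBatchTimeout(2*time.Second),
    )
    tp := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(bsp))
    // Shutdown drains the queue and flushes the final batch.
    defer func() { _ = tp.Shutdown(context.Background()) }()

    _, span := tp.Tracer("example").Start(context.Background(), "operation")
    span.End()
}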

10
vendor/go.opentelemetry.io/otel/sdk/trace/doc.go generated vendored Normal file

@ -0,0 +1,10 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package trace contains support for OpenTelemetry distributed tracing.
The following assumes a basic familiarity with OpenTelemetry concepts.
See https://opentelemetry.io.
*/
package trace // import "go.opentelemetry.io/otel/sdk/trace"

26
vendor/go.opentelemetry.io/otel/sdk/trace/event.go generated vendored Normal file

@ -0,0 +1,26 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"time"
"go.opentelemetry.io/otel/attribute"
)
// Event is a thing that happened during a Span's lifetime.
type Event struct {
// Name is the name of this event
Name string
// Attributes describe the aspects of the event.
Attributes []attribute.KeyValue
// DroppedAttributeCount is the number of attributes that were not
// recorded due to configured limits being reached.
DroppedAttributeCount int
// Time at which this event was recorded.
Time time.Time
}
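A minimal sketch of how events reach this struct: they are recorded through the API's span.AddEvent and surface here on export; the event name and attribute are hypothetical.

package main

import (
    "context"

    "go.opentelemetry.io/otel/attribute"
    sdktrace "go.opentelemetry.io/otel/sdk/trace"
    "go.opentelemetry.io/otel/trace"
)

func main() {
    tp := sdktrace.NewTracerProvider()
    defer func() { _ = tp.Shutdown(context.Background()) }()

    _, span := tp.Tracer("example").Start(context.Background(), "op")
    span.AddEvent("cache miss", trace.WithAttributes(attribute.String("key", "user:42")))
    span.End()
}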


@ -0,0 +1,59 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"slices"
"sync"
"go.opentelemetry.io/otel/internal/global"
)
// evictedQueue is a FIFO queue with a configurable capacity.
type evictedQueue[T any] struct {
queue []T
capacity int
droppedCount int
logDropped func()
}
func newEvictedQueueEvent(capacity int) evictedQueue[Event] {
// Do not pre-allocate queue, do this lazily.
return evictedQueue[Event]{
capacity: capacity,
logDropped: sync.OnceFunc(func() { global.Warn("limit reached: dropping trace trace.Event") }),
}
}
func newEvictedQueueLink(capacity int) evictedQueue[Link] {
// Do not pre-allocate queue, do this lazily.
return evictedQueue[Link]{
capacity: capacity,
logDropped: sync.OnceFunc(func() { global.Warn("limit reached: dropping trace trace.Link") }),
}
}
// add adds value to the evictedQueue eq. If eq is at capacity, the oldest
// queued value will be discarded and the drop count incremented.
func (eq *evictedQueue[T]) add(value T) {
if eq.capacity == 0 {
eq.droppedCount++
eq.logDropped()
return
}
if eq.capacity > 0 && len(eq.queue) == eq.capacity {
// Drop first-in while avoiding allocating more capacity to eq.queue.
copy(eq.queue[:eq.capacity-1], eq.queue[1:])
eq.queue = eq.queue[:eq.capacity-1]
eq.droppedCount++
eq.logDropped()
}
eq.queue = append(eq.queue, value)
}
// copy returns a copy of the evictedQueue.
func (eq *evictedQueue[T]) copy() []T {
return slices.Clone(eq.queue)
}
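A standalone mirror of the eviction behavior above: a bounded FIFO that drops the oldest element once at capacity.

package main

import "fmt"

type bounded[T any] struct {
    queue    []T
    capacity int
    dropped  int
}

func (q *bounded[T]) add(v T) {
    if q.capacity == 0 {
        q.dropped++ // zero capacity: drop everything
        return
    }
    if q.capacity > 0 && len(q.queue) == q.capacity {
        // Drop first-in while reusing the backing array, as add above does.
        copy(q.queue[:q.capacity-1], q.queue[1:])
        q.queue = q.queue[:q.capacity-1]
        q.dropped++
    }
    q.queue = append(q.queue, v)
}

func main() {
    q := bounded[int]{capacity: 2}
    for i := 1; i <= 3; i++ {
        q.add(i)
    }
    fmt.Println(q.queue, q.dropped) // [2 3] 1
}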

81
vendor/go.opentelemetry.io/otel/sdk/trace/id_generator.go generated vendored Normal file

@ -0,0 +1,81 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
crand "crypto/rand"
"encoding/binary"
"math/rand"
"sync"
"go.opentelemetry.io/otel/trace"
)
// IDGenerator allows custom generators for TraceID and SpanID.
type IDGenerator interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// NewIDs returns a new trace and span ID.
NewIDs(ctx context.Context) (trace.TraceID, trace.SpanID)
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// NewSpanID returns an ID for a new span in the trace with traceID.
NewSpanID(ctx context.Context, traceID trace.TraceID) trace.SpanID
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
type randomIDGenerator struct {
sync.Mutex
randSource *rand.Rand
}
var _ IDGenerator = &randomIDGenerator{}
// NewSpanID returns a non-zero span ID from a randomly-chosen sequence.
func (gen *randomIDGenerator) NewSpanID(ctx context.Context, traceID trace.TraceID) trace.SpanID {
gen.Lock()
defer gen.Unlock()
sid := trace.SpanID{}
for {
_, _ = gen.randSource.Read(sid[:])
if sid.IsValid() {
break
}
}
return sid
}
// NewIDs returns a non-zero trace ID and a non-zero span ID from a
// randomly-chosen sequence.
func (gen *randomIDGenerator) NewIDs(ctx context.Context) (trace.TraceID, trace.SpanID) {
gen.Lock()
defer gen.Unlock()
tid := trace.TraceID{}
sid := trace.SpanID{}
for {
_, _ = gen.randSource.Read(tid[:])
if tid.IsValid() {
break
}
}
for {
_, _ = gen.randSource.Read(sid[:])
if sid.IsValid() {
break
}
}
return tid, sid
}
func defaultIDGenerator() IDGenerator {
gen := &randomIDGenerator{}
var rngSeed int64
_ = binary.Read(crand.Reader, binary.LittleEndian, &rngSeed)
gen.randSource = rand.New(rand.NewSource(rngSeed))
return gen
}
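A hedged sketch of a custom IDGenerator: the deterministic counter scheme below is invented for illustration (useful in tests), and is wired in through the WithIDGenerator option defined in provider.go:

package main

import (
	"context"
	"encoding/binary"
	"sync/atomic"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	"go.opentelemetry.io/otel/trace"
)

// seqIDGenerator is a hypothetical deterministic IDGenerator for tests.
type seqIDGenerator struct{ n uint64 }

func (g *seqIDGenerator) NewIDs(ctx context.Context) (trace.TraceID, trace.SpanID) {
	var tid trace.TraceID
	// Counter starts at 1, so the generated IDs are always non-zero.
	binary.BigEndian.PutUint64(tid[8:], atomic.AddUint64(&g.n, 1))
	return tid, g.NewSpanID(ctx, tid)
}

func (g *seqIDGenerator) NewSpanID(ctx context.Context, _ trace.TraceID) trace.SpanID {
	var sid trace.SpanID
	binary.BigEndian.PutUint64(sid[:], atomic.AddUint64(&g.n, 1))
	return sid
}

func main() {
	_ = sdktrace.NewTracerProvider(sdktrace.WithIDGenerator(&seqIDGenerator{}))
}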

23
vendor/go.opentelemetry.io/otel/sdk/trace/link.go generated vendored Normal file

@ -0,0 +1,23 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
)
// Link is the relationship between two Spans. The relationship can be within
// the same Trace or across different Traces.
type Link struct {
// SpanContext of the linked Span.
SpanContext trace.SpanContext
// Attributes describe the aspects of the link.
Attributes []attribute.KeyValue
// DroppedAttributeCount is the number of attributes that were not
// recorded due to configured limits being reached.
DroppedAttributeCount int
}
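A sketch of creating such a link from the instrumentation side; the attribute key is a placeholder chosen for illustration:

package main

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	"go.opentelemetry.io/otel/trace"
)

func main() {
	tp := sdktrace.NewTracerProvider()
	defer func() { _ = tp.Shutdown(context.Background()) }()
	tr := tp.Tracer("example")

	_, producer := tr.Start(context.Background(), "producer")
	producer.End()

	// Link the consumer span to the producer's SpanContext; the SDK stores
	// this as the Link struct above, applying AttributePerLinkCountLimit.
	_, consumer := tr.Start(context.Background(), "consumer")
	consumer.AddLink(trace.Link{
		SpanContext: producer.SpanContext(),
		Attributes:  []attribute.KeyValue{attribute.String("messaging.source", "queue-1")},
	})
	consumer.End()
}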

493
vendor/go.opentelemetry.io/otel/sdk/trace/provider.go generated vendored Normal file

@ -0,0 +1,493 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
"fmt"
"sync"
"sync/atomic"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/sdk/instrumentation"
"go.opentelemetry.io/otel/sdk/resource"
"go.opentelemetry.io/otel/trace"
"go.opentelemetry.io/otel/trace/embedded"
"go.opentelemetry.io/otel/trace/noop"
)
const (
defaultTracerName = "go.opentelemetry.io/otel/sdk/tracer"
)
// tracerProviderConfig holds the configuration for a TracerProvider.
type tracerProviderConfig struct {
// processors contains the collection of SpanProcessors that form the
// processing pipeline for spans in the trace signal. SpanProcessors are
// registered with a TracerProvider, are called at the start and end of a
// Span's lifecycle, and are called in the order they are registered.
processors []SpanProcessor
// sampler is the default sampler used when creating new spans.
sampler Sampler
// idGenerator is used to generate all Span and Trace IDs when needed.
idGenerator IDGenerator
// spanLimits defines the attribute, event, and link limits for spans.
spanLimits SpanLimits
// resource contains attributes representing an entity that produces telemetry.
resource *resource.Resource
}
// MarshalLog is the marshaling function used by the logging system to represent this Provider.
func (cfg tracerProviderConfig) MarshalLog() interface{} {
return struct {
SpanProcessors []SpanProcessor
SamplerType string
IDGeneratorType string
SpanLimits SpanLimits
Resource *resource.Resource
}{
SpanProcessors: cfg.processors,
SamplerType: fmt.Sprintf("%T", cfg.sampler),
IDGeneratorType: fmt.Sprintf("%T", cfg.idGenerator),
SpanLimits: cfg.spanLimits,
Resource: cfg.resource,
}
}
// TracerProvider is an OpenTelemetry TracerProvider. It provides Tracers to
// instrumentation so it can trace operational flow through a system.
type TracerProvider struct {
embedded.TracerProvider
mu sync.Mutex
namedTracer map[instrumentation.Scope]*tracer
spanProcessors atomic.Pointer[spanProcessorStates]
isShutdown atomic.Bool
// These fields are not protected by the lock mu. They are assumed to be
// immutable after creation of the TracerProvider.
sampler Sampler
idGenerator IDGenerator
spanLimits SpanLimits
resource *resource.Resource
}
var _ trace.TracerProvider = &TracerProvider{}
// NewTracerProvider returns a new and configured TracerProvider.
//
// By default the returned TracerProvider is configured with:
// - a ParentBased(AlwaysSample) Sampler
// - a random number IDGenerator
// - the resource.Default() Resource
// - the default SpanLimits.
//
// The passed opts are used to override these default values and configure the
// returned TracerProvider appropriately.
func NewTracerProvider(opts ...TracerProviderOption) *TracerProvider {
o := tracerProviderConfig{
spanLimits: NewSpanLimits(),
}
o = applyTracerProviderEnvConfigs(o)
for _, opt := range opts {
o = opt.apply(o)
}
o = ensureValidTracerProviderConfig(o)
tp := &TracerProvider{
namedTracer: make(map[instrumentation.Scope]*tracer),
sampler: o.sampler,
idGenerator: o.idGenerator,
spanLimits: o.spanLimits,
resource: o.resource,
}
global.Info("TracerProvider created", "config", o)
spss := make(spanProcessorStates, 0, len(o.processors))
for _, sp := range o.processors {
spss = append(spss, newSpanProcessorState(sp))
}
tp.spanProcessors.Store(&spss)
return tp
}
// Tracer returns a Tracer with the given name and options. If a Tracer for
// the given name and options does not exist it is created, otherwise the
// existing Tracer is returned.
//
// If name is empty, defaultTracerName is used instead.
//
// This method is safe to be called concurrently.
func (p *TracerProvider) Tracer(name string, opts ...trace.TracerOption) trace.Tracer {
// This check happens before the mutex is acquired to avoid deadlocking if Tracer() is called from within Shutdown().
if p.isShutdown.Load() {
return noop.NewTracerProvider().Tracer(name, opts...)
}
c := trace.NewTracerConfig(opts...)
if name == "" {
name = defaultTracerName
}
is := instrumentation.Scope{
Name: name,
Version: c.InstrumentationVersion(),
SchemaURL: c.SchemaURL(),
}
t, ok := func() (trace.Tracer, bool) {
p.mu.Lock()
defer p.mu.Unlock()
// Must check the flag after acquiring the mutex to avoid returning a valid tracer if Shutdown() ran
// after the first check above but before we acquired the mutex.
if p.isShutdown.Load() {
return noop.NewTracerProvider().Tracer(name, opts...), true
}
t, ok := p.namedTracer[is]
if !ok {
t = &tracer{
provider: p,
instrumentationScope: is,
}
p.namedTracer[is] = t
}
return t, ok
}()
if !ok {
// This code is outside the mutex to not hold the lock while calling third party logging code:
// - That code may do slow things like I/O, which would prolong the duration the lock is held,
// slowing down all tracing consumers.
// - Logging code may be instrumented with tracing and deadlock because it could try
// acquiring the same non-reentrant mutex.
global.Info("Tracer created", "name", name, "version", is.Version, "schemaURL", is.SchemaURL)
}
return t
}
// RegisterSpanProcessor adds the given SpanProcessor to the list of SpanProcessors.
func (p *TracerProvider) RegisterSpanProcessor(sp SpanProcessor) {
// This check prevents calls during a shutdown.
if p.isShutdown.Load() {
return
}
p.mu.Lock()
defer p.mu.Unlock()
// This check prevents calls after a shutdown.
if p.isShutdown.Load() {
return
}
current := p.getSpanProcessors()
newSPS := make(spanProcessorStates, 0, len(current)+1)
newSPS = append(newSPS, current...)
newSPS = append(newSPS, newSpanProcessorState(sp))
p.spanProcessors.Store(&newSPS)
}
// UnregisterSpanProcessor removes the given SpanProcessor from the list of SpanProcessors.
func (p *TracerProvider) UnregisterSpanProcessor(sp SpanProcessor) {
// This check prevents calls during a shutdown.
if p.isShutdown.Load() {
return
}
p.mu.Lock()
defer p.mu.Unlock()
// This check prevents calls after a shutdown.
if p.isShutdown.Load() {
return
}
old := p.getSpanProcessors()
if len(old) == 0 {
return
}
spss := make(spanProcessorStates, len(old))
copy(spss, old)
// stop the span processor if it is started and remove it from the list
var stopOnce *spanProcessorState
var idx int
for i, sps := range spss {
if sps.sp == sp {
stopOnce = sps
idx = i
}
}
if stopOnce != nil {
stopOnce.state.Do(func() {
if err := sp.Shutdown(context.Background()); err != nil {
otel.Handle(err)
}
})
}
if len(spss) > 1 {
copy(spss[idx:], spss[idx+1:])
}
spss[len(spss)-1] = nil
spss = spss[:len(spss)-1]
p.spanProcessors.Store(&spss)
}
// ForceFlush immediately exports all spans that have not yet been exported for
// all the registered span processors.
func (p *TracerProvider) ForceFlush(ctx context.Context) error {
spss := p.getSpanProcessors()
if len(spss) == 0 {
return nil
}
for _, sps := range spss {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
if err := sps.sp.ForceFlush(ctx); err != nil {
return err
}
}
return nil
}
// Shutdown shuts down TracerProvider. All registered span processors are shut down
// in the order they were registered and any held computational resources are released.
// After Shutdown is called, all methods are no-ops.
func (p *TracerProvider) Shutdown(ctx context.Context) error {
// This check prevents deadlocks in case of recursive shutdown.
if p.isShutdown.Load() {
return nil
}
p.mu.Lock()
defer p.mu.Unlock()
// This check prevents calls after a shutdown has already been done concurrently.
if !p.isShutdown.CompareAndSwap(false, true) { // did toggle?
return nil
}
var retErr error
for _, sps := range p.getSpanProcessors() {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
var err error
sps.state.Do(func() {
err = sps.sp.Shutdown(ctx)
})
if err != nil {
if retErr == nil {
retErr = err
} else {
// Poor man's list of errors
retErr = fmt.Errorf("%w; %w", retErr, err)
}
}
}
p.spanProcessors.Store(&spanProcessorStates{})
return retErr
}
func (p *TracerProvider) getSpanProcessors() spanProcessorStates {
return *(p.spanProcessors.Load())
}
// TracerProviderOption configures a TracerProvider.
type TracerProviderOption interface {
apply(tracerProviderConfig) tracerProviderConfig
}
type traceProviderOptionFunc func(tracerProviderConfig) tracerProviderConfig
func (fn traceProviderOptionFunc) apply(cfg tracerProviderConfig) tracerProviderConfig {
return fn(cfg)
}
// WithSyncer registers the exporter with the TracerProvider using a
// SimpleSpanProcessor.
//
// This is not recommended for production use. The synchronous nature of the
// SimpleSpanProcessor that will wrap the exporter makes it good for testing,
// debugging, or showing examples of other features, but it will be slow and
// have a high computation resource usage overhead. The WithBatcher option is
// recommended for production use instead.
func WithSyncer(e SpanExporter) TracerProviderOption {
return WithSpanProcessor(NewSimpleSpanProcessor(e))
}
// WithBatcher registers the exporter with the TracerProvider using a
// BatchSpanProcessor configured with the passed opts.
func WithBatcher(e SpanExporter, opts ...BatchSpanProcessorOption) TracerProviderOption {
return WithSpanProcessor(NewBatchSpanProcessor(e, opts...))
}
// WithSpanProcessor registers the SpanProcessor with a TracerProvider.
func WithSpanProcessor(sp SpanProcessor) TracerProviderOption {
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
cfg.processors = append(cfg.processors, sp)
return cfg
})
}
// WithResource returns a TracerProviderOption that will configure the
// Resource r as a TracerProvider's Resource. The configured Resource is
// referenced by all the Tracers the TracerProvider creates. It represents the
// entity producing telemetry.
//
// If this option is not used, the TracerProvider will use the
// resource.Default() Resource by default.
func WithResource(r *resource.Resource) TracerProviderOption {
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
var err error
cfg.resource, err = resource.Merge(resource.Environment(), r)
if err != nil {
otel.Handle(err)
}
return cfg
})
}
// WithIDGenerator returns a TracerProviderOption that will configure the
// IDGenerator g as a TracerProvider's IDGenerator. The configured IDGenerator
// is used by the Tracers the TracerProvider creates to generate new Span and
// Trace IDs.
//
// If this option is not used, the TracerProvider will use a random number
// IDGenerator by default.
func WithIDGenerator(g IDGenerator) TracerProviderOption {
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
if g != nil {
cfg.idGenerator = g
}
return cfg
})
}
// WithSampler returns a TracerProviderOption that will configure the Sampler
// s as a TracerProvider's Sampler. The configured Sampler is used by the
// Tracers the TracerProvider creates to make their sampling decisions for the
// Spans they create.
//
// This option overrides the Sampler configured through the OTEL_TRACES_SAMPLER
// and OTEL_TRACES_SAMPLER_ARG environment variables. If this option is not used
// and the sampler is not configured through environment variables or the environment
// contains invalid/unsupported configuration, the TracerProvider will use a
// ParentBased(AlwaysSample) Sampler by default.
func WithSampler(s Sampler) TracerProviderOption {
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
if s != nil {
cfg.sampler = s
}
return cfg
})
}
// WithSpanLimits returns a TracerProviderOption that configures a
// TracerProvider to use the SpanLimits sl. These SpanLimits bound any Span
// created by a Tracer from the TracerProvider.
//
// If any field of sl is zero or negative it will be replaced with the default
// value for that field.
//
// If this or WithRawSpanLimits are not provided, the TracerProvider will use
// the limits defined by environment variables, or the defaults if unset.
// Refer to the NewSpanLimits documentation for information about this
// relationship.
//
// Deprecated: Use WithRawSpanLimits instead which allows setting unlimited
// and zero limits. This option will be kept until the next major version
// incremented release.
func WithSpanLimits(sl SpanLimits) TracerProviderOption {
if sl.AttributeValueLengthLimit <= 0 {
sl.AttributeValueLengthLimit = DefaultAttributeValueLengthLimit
}
if sl.AttributeCountLimit <= 0 {
sl.AttributeCountLimit = DefaultAttributeCountLimit
}
if sl.EventCountLimit <= 0 {
sl.EventCountLimit = DefaultEventCountLimit
}
if sl.AttributePerEventCountLimit <= 0 {
sl.AttributePerEventCountLimit = DefaultAttributePerEventCountLimit
}
if sl.LinkCountLimit <= 0 {
sl.LinkCountLimit = DefaultLinkCountLimit
}
if sl.AttributePerLinkCountLimit <= 0 {
sl.AttributePerLinkCountLimit = DefaultAttributePerLinkCountLimit
}
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
cfg.spanLimits = sl
return cfg
})
}
// WithRawSpanLimits returns a TracerProviderOption that configures a
// TracerProvider to use these limits. These limits bound any Span created by
// a Tracer from the TracerProvider.
//
// The limits will be used as-is. Zero or negative values will not be changed
// to the default value like WithSpanLimits does. Setting a limit to zero will
// effectively disable the related resource it limits and setting to a
// negative value will mean that resource is unlimited. Consequently, this
// means that the zero-value SpanLimits will disable all span resources.
// Because of this, limits should be constructed using NewSpanLimits and
// updated accordingly.
//
// If this or WithSpanLimits are not provided, the TracerProvider will use the
// limits defined by environment variables, or the defaults if unset. Refer to
// the NewSpanLimits documentation for information about this relationship.
func WithRawSpanLimits(limits SpanLimits) TracerProviderOption {
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
cfg.spanLimits = limits
return cfg
})
}
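To make the zero/negative semantics concrete, a minimal sketch (the particular field choices are for illustration only):

package main

import sdktrace "go.opentelemetry.io/otel/sdk/trace"

func main() {
	// Start from the environment-aware defaults, then opt specific
	// resources out: a negative value means unlimited, zero disables.
	limits := sdktrace.NewSpanLimits()
	limits.AttributeValueLengthLimit = -1 // unlimited value length
	limits.LinkCountLimit = 0             // drop all links

	_ = sdktrace.NewTracerProvider(sdktrace.WithRawSpanLimits(limits))
}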
func applyTracerProviderEnvConfigs(cfg tracerProviderConfig) tracerProviderConfig {
for _, opt := range tracerProviderOptionsFromEnv() {
cfg = opt.apply(cfg)
}
return cfg
}
func tracerProviderOptionsFromEnv() []TracerProviderOption {
var opts []TracerProviderOption
sampler, err := samplerFromEnv()
if err != nil {
otel.Handle(err)
}
if sampler != nil {
opts = append(opts, WithSampler(sampler))
}
return opts
}
// ensureValidTracerProviderConfig ensures that the given tracerProviderConfig is valid.
func ensureValidTracerProviderConfig(cfg tracerProviderConfig) tracerProviderConfig {
if cfg.sampler == nil {
cfg.sampler = ParentBased(AlwaysSample())
}
if cfg.idGenerator == nil {
cfg.idGenerator = defaultIDGenerator()
}
if cfg.resource == nil {
cfg.resource = resource.Default()
}
return cfg
}
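Putting the options above together, a representative construction; the stdouttrace exporter and the service name are assumptions for illustration:

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)

func main() {
	exp, err := stdouttrace.New()
	if err != nil {
		log.Fatal(err)
	}
	res, err := resource.Merge(resource.Default(),
		resource.NewWithAttributes(semconv.SchemaURL,
			semconv.ServiceName("example-service")))
	if err != nil {
		log.Fatal(err)
	}
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exp),
		sdktrace.WithSampler(sdktrace.ParentBased(sdktrace.TraceIDRatioBased(0.25))),
		sdktrace.WithResource(res),
	)
	defer func() { _ = tp.Shutdown(context.Background()) }()
	otel.SetTracerProvider(tp)
}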

97
vendor/go.opentelemetry.io/otel/sdk/trace/sampler_env.go generated vendored Normal file

@ -0,0 +1,97 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"errors"
"fmt"
"os"
"strconv"
"strings"
)
const (
tracesSamplerKey = "OTEL_TRACES_SAMPLER"
tracesSamplerArgKey = "OTEL_TRACES_SAMPLER_ARG"
samplerAlwaysOn = "always_on"
samplerAlwaysOff = "always_off"
samplerTraceIDRatio = "traceidratio"
samplerParentBasedAlwaysOn = "parentbased_always_on"
samplerParentBasedAlwaysOff = "parentbased_always_off"
samplerParentBasedTraceIDRatio = "parentbased_traceidratio"
)
type errUnsupportedSampler string
func (e errUnsupportedSampler) Error() string {
return fmt.Sprintf("unsupported sampler: %s", string(e))
}
var (
errNegativeTraceIDRatio = errors.New("invalid trace ID ratio: less than 0.0")
errGreaterThanOneTraceIDRatio = errors.New("invalid trace ID ratio: greater than 1.0")
)
type samplerArgParseError struct {
parseErr error
}
func (e samplerArgParseError) Error() string {
return fmt.Sprintf("parsing sampler argument: %s", e.parseErr.Error())
}
func (e samplerArgParseError) Unwrap() error {
return e.parseErr
}
func samplerFromEnv() (Sampler, error) {
sampler, ok := os.LookupEnv(tracesSamplerKey)
if !ok {
return nil, nil
}
sampler = strings.ToLower(strings.TrimSpace(sampler))
samplerArg, hasSamplerArg := os.LookupEnv(tracesSamplerArgKey)
samplerArg = strings.TrimSpace(samplerArg)
switch sampler {
case samplerAlwaysOn:
return AlwaysSample(), nil
case samplerAlwaysOff:
return NeverSample(), nil
case samplerTraceIDRatio:
if !hasSamplerArg {
return TraceIDRatioBased(1.0), nil
}
return parseTraceIDRatio(samplerArg)
case samplerParentBasedAlwaysOn:
return ParentBased(AlwaysSample()), nil
case samplerParentBasedAlwaysOff:
return ParentBased(NeverSample()), nil
case samplerParentBasedTraceIDRatio:
if !hasSamplerArg {
return ParentBased(TraceIDRatioBased(1.0)), nil
}
ratio, err := parseTraceIDRatio(samplerArg)
return ParentBased(ratio), err
default:
return nil, errUnsupportedSampler(sampler)
}
}
func parseTraceIDRatio(arg string) (Sampler, error) {
v, err := strconv.ParseFloat(arg, 64)
if err != nil {
return TraceIDRatioBased(1.0), samplerArgParseError{err}
}
if v < 0.0 {
return TraceIDRatioBased(1.0), errNegativeTraceIDRatio
}
if v > 1.0 {
return TraceIDRatioBased(1.0), errGreaterThanOneTraceIDRatio
}
return TraceIDRatioBased(v), nil
}
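A sketch of how this environment-based configuration is exercised; samplerFromEnv is unexported, so the variables must simply be set before NewTracerProvider runs (values chosen for illustration):

package main

import (
	"os"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Equivalent to WithSampler(ParentBased(TraceIDRatioBased(0.1))),
	// unless a WithSampler option later overrides it.
	os.Setenv("OTEL_TRACES_SAMPLER", "parentbased_traceidratio")
	os.Setenv("OTEL_TRACES_SAMPLER_ARG", "0.1")

	tp := sdktrace.NewTracerProvider() // reads the variables via samplerFromEnv
	_ = tp
}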

282
vendor/go.opentelemetry.io/otel/sdk/trace/sampling.go generated vendored Normal file

@ -0,0 +1,282 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
"encoding/binary"
"fmt"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
)
// Sampler decides whether a trace should be sampled and exported.
type Sampler interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// ShouldSample returns a SamplingResult based on a decision made from the
// passed parameters.
ShouldSample(parameters SamplingParameters) SamplingResult
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Description returns information describing the Sampler.
Description() string
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
// SamplingParameters contains the values passed to a Sampler.
type SamplingParameters struct {
ParentContext context.Context
TraceID trace.TraceID
Name string
Kind trace.SpanKind
Attributes []attribute.KeyValue
Links []trace.Link
}
// SamplingDecision indicates whether a span is dropped, recorded and/or sampled.
type SamplingDecision uint8
// Valid sampling decisions.
const (
// Drop will not record the span and all attributes/events will be dropped.
Drop SamplingDecision = iota
// RecordOnly indicates the span's `IsRecording() == true`, but the
// `Sampled` flag *must not* be set.
RecordOnly
// RecordAndSample indicates the span's `IsRecording() == true` and the
// `Sampled` flag *must* be set.
RecordAndSample
)
// SamplingResult conveys a SamplingDecision, set of Attributes and a Tracestate.
type SamplingResult struct {
Decision SamplingDecision
Attributes []attribute.KeyValue
Tracestate trace.TraceState
}
type traceIDRatioSampler struct {
traceIDUpperBound uint64
description string
}
func (ts traceIDRatioSampler) ShouldSample(p SamplingParameters) SamplingResult {
psc := trace.SpanContextFromContext(p.ParentContext)
x := binary.BigEndian.Uint64(p.TraceID[8:16]) >> 1
if x < ts.traceIDUpperBound {
return SamplingResult{
Decision: RecordAndSample,
Tracestate: psc.TraceState(),
}
}
return SamplingResult{
Decision: Drop,
Tracestate: psc.TraceState(),
}
}
func (ts traceIDRatioSampler) Description() string {
return ts.description
}
// TraceIDRatioBased samples a given fraction of traces. Fractions >= 1 will
// always sample. Fractions < 0 are treated as zero. To respect the
// parent trace's `SampledFlag`, the `TraceIDRatioBased` sampler should be used
// as a delegate of a `Parent` sampler.
//
//nolint:revive // revive complains about stutter of `trace.TraceIDRatioBased`
func TraceIDRatioBased(fraction float64) Sampler {
if fraction >= 1 {
return AlwaysSample()
}
if fraction <= 0 {
fraction = 0
}
return &traceIDRatioSampler{
traceIDUpperBound: uint64(fraction * (1 << 63)),
description: fmt.Sprintf("TraceIDRatioBased{%g}", fraction),
}
}
type alwaysOnSampler struct{}
func (as alwaysOnSampler) ShouldSample(p SamplingParameters) SamplingResult {
return SamplingResult{
Decision: RecordAndSample,
Tracestate: trace.SpanContextFromContext(p.ParentContext).TraceState(),
}
}
func (as alwaysOnSampler) Description() string {
return "AlwaysOnSampler"
}
// AlwaysSample returns a Sampler that samples every trace.
// Be careful about using this sampler in a production application with
// significant traffic: a new trace will be started and exported for every
// request.
func AlwaysSample() Sampler {
return alwaysOnSampler{}
}
type alwaysOffSampler struct{}
func (as alwaysOffSampler) ShouldSample(p SamplingParameters) SamplingResult {
return SamplingResult{
Decision: Drop,
Tracestate: trace.SpanContextFromContext(p.ParentContext).TraceState(),
}
}
func (as alwaysOffSampler) Description() string {
return "AlwaysOffSampler"
}
// NeverSample returns a Sampler that samples no traces.
func NeverSample() Sampler {
return alwaysOffSampler{}
}
// ParentBased returns a sampler decorator that behaves differently based on
// the parent of the span. If the span has no parent, the decorated sampler
// is used to make the sampling decision. If the span has
// a parent, depending on whether the parent is remote and whether it
// is sampled, one of the following samplers will apply:
// - remoteParentSampled(Sampler) (default: AlwaysOn)
// - remoteParentNotSampled(Sampler) (default: AlwaysOff)
// - localParentSampled(Sampler) (default: AlwaysOn)
// - localParentNotSampled(Sampler) (default: AlwaysOff)
func ParentBased(root Sampler, samplers ...ParentBasedSamplerOption) Sampler {
return parentBased{
root: root,
config: configureSamplersForParentBased(samplers),
}
}
type parentBased struct {
root Sampler
config samplerConfig
}
func configureSamplersForParentBased(samplers []ParentBasedSamplerOption) samplerConfig {
c := samplerConfig{
remoteParentSampled: AlwaysSample(),
remoteParentNotSampled: NeverSample(),
localParentSampled: AlwaysSample(),
localParentNotSampled: NeverSample(),
}
for _, so := range samplers {
c = so.apply(c)
}
return c
}
// samplerConfig is a group of options for parentBased sampler.
type samplerConfig struct {
remoteParentSampled, remoteParentNotSampled Sampler
localParentSampled, localParentNotSampled Sampler
}
// ParentBasedSamplerOption configures the sampler for a particular sampling case.
type ParentBasedSamplerOption interface {
apply(samplerConfig) samplerConfig
}
// WithRemoteParentSampled sets the sampler for the case of sampled remote parent.
func WithRemoteParentSampled(s Sampler) ParentBasedSamplerOption {
return remoteParentSampledOption{s}
}
type remoteParentSampledOption struct {
s Sampler
}
func (o remoteParentSampledOption) apply(config samplerConfig) samplerConfig {
config.remoteParentSampled = o.s
return config
}
// WithRemoteParentNotSampled sets the sampler for the case of remote parent
// which is not sampled.
func WithRemoteParentNotSampled(s Sampler) ParentBasedSamplerOption {
return remoteParentNotSampledOption{s}
}
type remoteParentNotSampledOption struct {
s Sampler
}
func (o remoteParentNotSampledOption) apply(config samplerConfig) samplerConfig {
config.remoteParentNotSampled = o.s
return config
}
// WithLocalParentSampled sets the sampler for the case of sampled local parent.
func WithLocalParentSampled(s Sampler) ParentBasedSamplerOption {
return localParentSampledOption{s}
}
type localParentSampledOption struct {
s Sampler
}
func (o localParentSampledOption) apply(config samplerConfig) samplerConfig {
config.localParentSampled = o.s
return config
}
// WithLocalParentNotSampled sets the sampler for the case of local parent
// which is not sampled.
func WithLocalParentNotSampled(s Sampler) ParentBasedSamplerOption {
return localParentNotSampledOption{s}
}
type localParentNotSampledOption struct {
s Sampler
}
func (o localParentNotSampledOption) apply(config samplerConfig) samplerConfig {
config.localParentNotSampled = o.s
return config
}
func (pb parentBased) ShouldSample(p SamplingParameters) SamplingResult {
psc := trace.SpanContextFromContext(p.ParentContext)
if psc.IsValid() {
if psc.IsRemote() {
if psc.IsSampled() {
return pb.config.remoteParentSampled.ShouldSample(p)
}
return pb.config.remoteParentNotSampled.ShouldSample(p)
}
if psc.IsSampled() {
return pb.config.localParentSampled.ShouldSample(p)
}
return pb.config.localParentNotSampled.ShouldSample(p)
}
return pb.root.ShouldSample(p)
}
func (pb parentBased) Description() string {
return fmt.Sprintf("ParentBased{root:%s,remoteParentSampled:%s,"+
"remoteParentNotSampled:%s,localParentSampled:%s,localParentNotSampled:%s}",
pb.root.Description(),
pb.config.remoteParentSampled.Description(),
pb.config.remoteParentNotSampled.Description(),
pb.config.localParentSampled.Description(),
pb.config.localParentNotSampled.Description(),
)
}
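A small sketch composing the options above; the specific ratio is arbitrary:

package main

import (
	"fmt"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Sample 10% of traces at the root, keep following sampled parents,
	// and never resurrect traces a remote parent already dropped.
	s := sdktrace.ParentBased(
		sdktrace.TraceIDRatioBased(0.1),
		sdktrace.WithRemoteParentNotSampled(sdktrace.NeverSample()),
	)
	fmt.Println(s.Description())
}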

121
vendor/go.opentelemetry.io/otel/sdk/trace/simple_span_processor.go generated vendored Normal file

@ -0,0 +1,121 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
"sync"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/internal/global"
)
// simpleSpanProcessor is a SpanProcessor that synchronously sends all
// completed Spans to a trace.Exporter immediately.
type simpleSpanProcessor struct {
exporterMu sync.Mutex
exporter SpanExporter
stopOnce sync.Once
}
var _ SpanProcessor = (*simpleSpanProcessor)(nil)
// NewSimpleSpanProcessor returns a new SpanProcessor that will synchronously
// send completed spans to the exporter immediately.
//
// This SpanProcessor is not recommended for production use. The synchronous
// nature of this SpanProcessor makes it good for testing, debugging, or showing
// examples of other features, but it will be slow and have a high computation
// resource usage overhead. The BatchSpanProcessor is recommended for production
// use instead.
func NewSimpleSpanProcessor(exporter SpanExporter) SpanProcessor {
ssp := &simpleSpanProcessor{
exporter: exporter,
}
global.Warn("SimpleSpanProcessor is not recommended for production use, consider using BatchSpanProcessor instead.")
return ssp
}
// OnStart does nothing.
func (ssp *simpleSpanProcessor) OnStart(context.Context, ReadWriteSpan) {}
// OnEnd immediately exports a ReadOnlySpan.
func (ssp *simpleSpanProcessor) OnEnd(s ReadOnlySpan) {
ssp.exporterMu.Lock()
defer ssp.exporterMu.Unlock()
if ssp.exporter != nil && s.SpanContext().TraceFlags().IsSampled() {
if err := ssp.exporter.ExportSpans(context.Background(), []ReadOnlySpan{s}); err != nil {
otel.Handle(err)
}
}
}
// Shutdown shuts down the exporter this SimpleSpanProcessor exports to.
func (ssp *simpleSpanProcessor) Shutdown(ctx context.Context) error {
var err error
ssp.stopOnce.Do(func() {
stopFunc := func(exp SpanExporter) (<-chan error, func()) {
done := make(chan error)
return done, func() { done <- exp.Shutdown(ctx) }
}
// The exporter field of the simpleSpanProcessor needs to be zeroed to
// signal it is shut down, meaning all subsequent calls to OnEnd will
// be gracefully ignored. This needs to be done synchronously to avoid
// any race condition.
//
// A closure is used to keep reference to the exporter and then the
// field is zeroed. This ensures the simpleSpanProcessor is shut down
// before the exporter. This order is important as it avoids a potential
// deadlock. If the exporter shut down operation generates a span, that
// span would need to be exported. Meaning, OnEnd would be called and
// try acquiring the lock that is held here.
ssp.exporterMu.Lock()
done, shutdown := stopFunc(ssp.exporter)
ssp.exporter = nil
ssp.exporterMu.Unlock()
go shutdown()
// Wait for the exporter to shut down or the deadline to expire.
select {
case err = <-done:
case <-ctx.Done():
// It is possible for the exporter to have immediately shut down and
// the context to be done simultaneously. In that case this outer
// select statement will randomly choose a case. This will result in
// a different returned error for similar scenarios. Instead, double
// check if the exporter shut down at the same time and return that
// error if so. This will ensure consistency as well as ensure
// the caller knows the exporter shut down successfully (they can
// already determine if the deadline is expired given they passed
// the context).
select {
case err = <-done:
default:
err = ctx.Err()
}
}
})
return err
}
// ForceFlush does nothing as there is no data to flush.
func (ssp *simpleSpanProcessor) ForceFlush(context.Context) error {
return nil
}
// MarshalLog is the marshaling function used by the logging system to represent
// this Span Processor.
func (ssp *simpleSpanProcessor) MarshalLog() interface{} {
return struct {
Type string
Exporter SpanExporter
}{
Type: "SimpleSpanProcessor",
Exporter: ssp.exporter,
}
}
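A sketch of the testing use case called out above; captureExporter is a hypothetical in-memory SpanExporter, not part of this package:

package main

import (
	"context"
	"fmt"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// captureExporter is a hypothetical in-memory SpanExporter used with the
// synchronous processor in tests.
type captureExporter struct{ spans []sdktrace.ReadOnlySpan }

func (e *captureExporter) ExportSpans(_ context.Context, s []sdktrace.ReadOnlySpan) error {
	e.spans = append(e.spans, s...)
	return nil
}

func (e *captureExporter) Shutdown(context.Context) error { return nil }

func main() {
	exp := &captureExporter{}
	tp := sdktrace.NewTracerProvider(sdktrace.WithSyncer(exp))
	_, span := tp.Tracer("test").Start(context.Background(), "op")
	span.End() // exported synchronously by the SimpleSpanProcessor
	fmt.Println("captured spans:", len(exp.spans))
}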

133
vendor/go.opentelemetry.io/otel/sdk/trace/snapshot.go generated vendored Normal file

@ -0,0 +1,133 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"time"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk/instrumentation"
"go.opentelemetry.io/otel/sdk/resource"
"go.opentelemetry.io/otel/trace"
)
// snapshot is a record of a span's state at a particular checkpointed time.
// It is used as a read-only representation of that state.
type snapshot struct {
name string
spanContext trace.SpanContext
parent trace.SpanContext
spanKind trace.SpanKind
startTime time.Time
endTime time.Time
attributes []attribute.KeyValue
events []Event
links []Link
status Status
childSpanCount int
droppedAttributeCount int
droppedEventCount int
droppedLinkCount int
resource *resource.Resource
instrumentationScope instrumentation.Scope
}
var _ ReadOnlySpan = snapshot{}
func (s snapshot) private() {}
// Name returns the name of the span.
func (s snapshot) Name() string {
return s.name
}
// SpanContext returns the unique SpanContext that identifies the span.
func (s snapshot) SpanContext() trace.SpanContext {
return s.spanContext
}
// Parent returns the unique SpanContext that identifies the parent of the
// span if one exists. If the span has no parent the returned SpanContext
// will be invalid.
func (s snapshot) Parent() trace.SpanContext {
return s.parent
}
// SpanKind returns the role the span plays in a Trace.
func (s snapshot) SpanKind() trace.SpanKind {
return s.spanKind
}
// StartTime returns the time the span started recording.
func (s snapshot) StartTime() time.Time {
return s.startTime
}
// EndTime returns the time the span stopped recording. It will be zero if
// the span has not ended.
func (s snapshot) EndTime() time.Time {
return s.endTime
}
// Attributes returns the defining attributes of the span.
func (s snapshot) Attributes() []attribute.KeyValue {
return s.attributes
}
// Links returns all the links the span has to other spans.
func (s snapshot) Links() []Link {
return s.links
}
// Events returns all the events that occurred within the span's
// lifetime.
func (s snapshot) Events() []Event {
return s.events
}
// Status returns the span's status.
func (s snapshot) Status() Status {
return s.status
}
// InstrumentationScope returns information about the instrumentation
// scope that created the span.
func (s snapshot) InstrumentationScope() instrumentation.Scope {
return s.instrumentationScope
}
// InstrumentationLibrary returns information about the instrumentation
// library that created the span.
func (s snapshot) InstrumentationLibrary() instrumentation.Library {
return s.instrumentationScope
}
// Resource returns information about the entity that produced the span.
func (s snapshot) Resource() *resource.Resource {
return s.resource
}
// DroppedAttributes returns the number of attributes dropped by the span
// due to limits being reached.
func (s snapshot) DroppedAttributes() int {
return s.droppedAttributeCount
}
// DroppedLinks returns the number of links dropped by the span due to limits
// being reached.
func (s snapshot) DroppedLinks() int {
return s.droppedLinkCount
}
// DroppedEvents returns the number of events dropped by the span due to
// limits being reached.
func (s snapshot) DroppedEvents() int {
return s.droppedEventCount
}
// ChildSpanCount returns the count of spans that consider the span a
// direct parent.
func (s snapshot) ChildSpanCount() int {
return s.childSpanCount
}

846
vendor/go.opentelemetry.io/otel/sdk/trace/span.go generated vendored Normal file

@ -0,0 +1,846 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
"fmt"
"reflect"
"runtime"
rt "runtime/trace"
"slices"
"strings"
"sync"
"time"
"unicode/utf8"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/sdk/instrumentation"
"go.opentelemetry.io/otel/sdk/resource"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
"go.opentelemetry.io/otel/trace"
"go.opentelemetry.io/otel/trace/embedded"
)
// ReadOnlySpan allows reading information from the data structure underlying a
// trace.Span. It is used in places where reading information from a span is
// necessary but changing the span isn't necessary or allowed.
//
// Warning: methods may be added to this interface in minor releases.
type ReadOnlySpan interface {
// Name returns the name of the span.
Name() string
// SpanContext returns the unique SpanContext that identifies the span.
SpanContext() trace.SpanContext
// Parent returns the unique SpanContext that identifies the parent of the
// span if one exists. If the span has no parent the returned SpanContext
// will be invalid.
Parent() trace.SpanContext
// SpanKind returns the role the span plays in a Trace.
SpanKind() trace.SpanKind
// StartTime returns the time the span started recording.
StartTime() time.Time
// EndTime returns the time the span stopped recording. It will be zero if
// the span has not ended.
EndTime() time.Time
// Attributes returns the defining attributes of the span.
// The order of the returned attributes is not guaranteed to be stable across invocations.
Attributes() []attribute.KeyValue
// Links returns all the links the span has to other spans.
Links() []Link
// Events returns all the events that occurred within the span's
// lifetime.
Events() []Event
// Status returns the span's status.
Status() Status
// InstrumentationScope returns information about the instrumentation
// scope that created the span.
InstrumentationScope() instrumentation.Scope
// InstrumentationLibrary returns information about the instrumentation
// library that created the span.
// Deprecated: please use InstrumentationScope instead.
InstrumentationLibrary() instrumentation.Library
// Resource returns information about the entity that produced the span.
Resource() *resource.Resource
// DroppedAttributes returns the number of attributes dropped by the span
// due to limits being reached.
DroppedAttributes() int
// DroppedLinks returns the number of links dropped by the span due to
// limits being reached.
DroppedLinks() int
// DroppedEvents returns the number of events dropped by the span due to
// limits being reached.
DroppedEvents() int
// ChildSpanCount returns the count of spans that consider the span a
// direct parent.
ChildSpanCount() int
// A private method to prevent users from implementing the
// interface, so that future additions to it will not
// violate compatibility.
private()
}
// ReadWriteSpan exposes the same methods as trace.Span and in addition allows
// reading information from the underlying data structure.
// This interface exposes the union of the methods of trace.Span (which is a
// "write-only" span) and ReadOnlySpan. New methods for writing or reading span
// information should be added under trace.Span or ReadOnlySpan, respectively.
//
// Warning: methods may be added to this interface in minor releases.
type ReadWriteSpan interface {
trace.Span
ReadOnlySpan
}
// recordingSpan is an implementation of the OpenTelemetry Span API
// representing the individual component of a trace that is sampled.
type recordingSpan struct {
embedded.Span
// mu protects the contents of this span.
mu sync.Mutex
// parent holds the parent span of this span as a trace.SpanContext.
parent trace.SpanContext
// spanKind represents the kind of this span as a trace.SpanKind.
spanKind trace.SpanKind
// name is the name of this span.
name string
// startTime is the time at which this span was started.
startTime time.Time
// endTime is the time at which this span was ended. It contains the zero
// value of time.Time until the span is ended.
endTime time.Time
// status is the status of this span.
status Status
// childSpanCount holds the number of child spans created for this span.
childSpanCount int
// spanContext holds the SpanContext of this span.
spanContext trace.SpanContext
// attributes is a collection of user-provided key/values. The collection
// is constrained by a configurable maximum held by the parent
// TracerProvider. Attributes added after this maximum is reached are
// dropped. The number of dropped attributes is tracked and reported in
// the ReadOnlySpan exported when the span ends.
attributes []attribute.KeyValue
droppedAttributes int
logDropAttrsOnce sync.Once
// events are stored in FIFO queue capped by configured limit.
events evictedQueue[Event]
// links are stored in FIFO queue capped by configured limit.
links evictedQueue[Link]
// executionTracerTaskEnd ends the execution tracer span.
executionTracerTaskEnd func()
// tracer is the SDK tracer that created this span.
tracer *tracer
}
var (
_ ReadWriteSpan = (*recordingSpan)(nil)
_ runtimeTracer = (*recordingSpan)(nil)
)
// SpanContext returns the SpanContext of this span.
func (s *recordingSpan) SpanContext() trace.SpanContext {
if s == nil {
return trace.SpanContext{}
}
return s.spanContext
}
// IsRecording returns whether this span is being recorded. If this span has ended
// this will return false.
func (s *recordingSpan) IsRecording() bool {
if s == nil {
return false
}
s.mu.Lock()
defer s.mu.Unlock()
return s.endTime.IsZero()
}
// SetStatus sets the status of the Span in the form of a code and a
// description, overriding previous values set. The description is only
// included in the set status when the code is for an error. If this span is
// not being recorded, then this method does nothing.
func (s *recordingSpan) SetStatus(code codes.Code, description string) {
if !s.IsRecording() {
return
}
s.mu.Lock()
defer s.mu.Unlock()
if s.status.Code > code {
return
}
status := Status{Code: code}
if code == codes.Error {
status.Description = description
}
s.status = status
}
// SetAttributes sets attributes of this span.
//
// If a key from attributes already exists the value associated with that key
// will be overwritten with the value contained in attributes.
//
// If this span is not being recorded, then this method does nothing.
//
// If adding attributes to the span would exceed the maximum amount of
// attributes the span is configured to have, the last added attributes will
// be dropped.
func (s *recordingSpan) SetAttributes(attributes ...attribute.KeyValue) {
if !s.IsRecording() {
return
}
s.mu.Lock()
defer s.mu.Unlock()
limit := s.tracer.provider.spanLimits.AttributeCountLimit
if limit == 0 {
// No attributes allowed.
s.addDroppedAttr(len(attributes))
return
}
// If adding these attributes could exceed the capacity of s, perform a
// de-duplication and truncation while adding to avoid over allocation.
if limit > 0 && len(s.attributes)+len(attributes) > limit {
s.addOverCapAttrs(limit, attributes)
return
}
// Otherwise, add without deduplication. When attributes are read they
// will be deduplicated, optimizing the operation.
s.attributes = slices.Grow(s.attributes, len(s.attributes)+len(attributes))
for _, a := range attributes {
if !a.Valid() {
// Drop all invalid attributes.
s.addDroppedAttr(1)
continue
}
a = truncateAttr(s.tracer.provider.spanLimits.AttributeValueLengthLimit, a)
s.attributes = append(s.attributes, a)
}
}
// Declared as a var so tests can override.
var logDropAttrs = func() {
global.Warn("limit reached: dropping trace Span attributes")
}
// addDroppedAttr adds incr to the count of dropped attributes.
//
// The first, and only the first, time this method is called a warning will be
// logged.
//
// This method assumes s.mu.Lock is held by the caller.
func (s *recordingSpan) addDroppedAttr(incr int) {
s.droppedAttributes += incr
s.logDropAttrsOnce.Do(logDropAttrs)
}
// addOverCapAttrs adds the attributes attrs to the span s while
// de-duplicating the attributes of s and attrs and dropping attributes that
// exceed the limit.
//
// This method assumes s.mu.Lock is held by the caller.
//
// This method should only be called when there is a possibility that adding
// attrs to s will exceed the limit. Otherwise, attrs should be added to s
// without checking for duplicates and all retrieval methods of the attributes
// for s will de-duplicate as needed.
//
// This method assumes limit is a value > 0. The argument should be validated
// by the caller.
func (s *recordingSpan) addOverCapAttrs(limit int, attrs []attribute.KeyValue) {
// In order to not allocate more capacity to s.attributes than needed,
// prune and truncate this addition of attributes while adding.
// Do not set a capacity when creating this map. Benchmark testing has
// shown this to only add unused memory allocations in general use.
exists := make(map[attribute.Key]int)
s.dedupeAttrsFromRecord(&exists)
// Now that s.attributes is deduplicated, adding unique attributes up to
// the capacity of s will not over allocate s.attributes.
sum := len(attrs) + len(s.attributes)
s.attributes = slices.Grow(s.attributes, min(sum, limit))
for _, a := range attrs {
if !a.Valid() {
// Drop all invalid attributes.
s.addDroppedAttr(1)
continue
}
if idx, ok := exists[a.Key]; ok {
// Perform all updates before dropping, even when at capacity.
s.attributes[idx] = a
continue
}
if len(s.attributes) >= limit {
// Do not just drop all of the remaining attributes, make sure
// updates are checked and performed.
s.addDroppedAttr(1)
} else {
a = truncateAttr(s.tracer.provider.spanLimits.AttributeValueLengthLimit, a)
s.attributes = append(s.attributes, a)
exists[a.Key] = len(s.attributes) - 1
}
}
}
// truncateAttr returns a truncated version of attr. Only string and string
// slice attribute values are truncated. String values are truncated to at
// most a length of limit. Each string slice value is truncated in this fashion
// (the slice length itself is unaffected).
//
// No truncation is performed for a negative limit.
func truncateAttr(limit int, attr attribute.KeyValue) attribute.KeyValue {
if limit < 0 {
return attr
}
switch attr.Value.Type() {
case attribute.STRING:
if v := attr.Value.AsString(); len(v) > limit {
return attr.Key.String(safeTruncate(v, limit))
}
case attribute.STRINGSLICE:
v := attr.Value.AsStringSlice()
for i := range v {
if len(v[i]) > limit {
v[i] = safeTruncate(v[i], limit)
}
}
return attr.Key.StringSlice(v)
}
return attr
}
// safeTruncate truncates the string and guarantees valid UTF-8 is returned.
func safeTruncate(input string, limit int) string {
if trunc, ok := safeTruncateValidUTF8(input, limit); ok {
return trunc
}
trunc, _ := safeTruncateValidUTF8(strings.ToValidUTF8(input, ""), limit)
return trunc
}
// safeTruncateValidUTF8 returns a copy of the input string safely truncated to
// limit. The truncation is ensured to occur at the bounds of complete UTF-8
// characters. If invalid encoding of UTF-8 is encountered, input is returned
// with false, otherwise, the truncated input will be returned with true.
func safeTruncateValidUTF8(input string, limit int) (string, bool) {
for cnt := 0; cnt <= limit; {
r, size := utf8.DecodeRuneInString(input[cnt:])
if r == utf8.RuneError {
return input, false
}
if cnt+size > limit {
return input[:cnt], true
}
cnt += size
}
return input, true
}
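The technique can be restated standalone; this sketch mirrors the rune-boundary logic above but folds the invalid-UTF-8 handling into one function (an assumption: it simply cuts at the last valid boundary rather than retrying with ToValidUTF8 as safeTruncate does):

package main

import (
	"fmt"
	"unicode/utf8"
)

// truncateUTF8 cuts input to at most limit bytes without splitting a
// multi-byte rune.
func truncateUTF8(input string, limit int) string {
	for cnt := 0; cnt <= limit; {
		r, size := utf8.DecodeRuneInString(input[cnt:])
		if r == utf8.RuneError && size <= 1 {
			// End of string (size 0) or invalid byte (size 1).
			return input[:cnt]
		}
		if cnt+size > limit {
			return input[:cnt]
		}
		cnt += size
	}
	return input
}

func main() {
	fmt.Println(truncateUTF8("héllo", 2)) // "h": the 2-byte "é" will not be split
}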
// End ends the span. This method does nothing if the span is already ended or
// is not being recorded.
//
// The only SpanOption currently supported is WithTimestamp which will set the
// end time for a Span's life-cycle.
//
// If this method is called while panicking an error event is added to the
// Span before ending it and the panic is continued.
func (s *recordingSpan) End(options ...trace.SpanEndOption) {
// Do not start by checking if the span is being recorded which requires
// acquiring a lock. Make a minimal check that the span is not nil.
if s == nil {
return
}
// Store the end time as soon as possible to avoid artificially increasing
// the span's duration in case some operation below takes a while.
et := monotonicEndTime(s.startTime)
// Do the relatively expensive check now that we have an end time and see if we
// need to do any more processing.
if !s.IsRecording() {
return
}
config := trace.NewSpanEndConfig(options...)
if recovered := recover(); recovered != nil {
// Record but don't stop the panic.
defer panic(recovered)
opts := []trace.EventOption{
trace.WithAttributes(
semconv.ExceptionType(typeStr(recovered)),
semconv.ExceptionMessage(fmt.Sprint(recovered)),
),
}
if config.StackTrace() {
opts = append(opts, trace.WithAttributes(
semconv.ExceptionStacktrace(recordStackTrace()),
))
}
s.addEvent(semconv.ExceptionEventName, opts...)
}
if s.executionTracerTaskEnd != nil {
s.executionTracerTaskEnd()
}
s.mu.Lock()
// Setting endTime to non-zero marks the span as ended and not recording.
if config.Timestamp().IsZero() {
s.endTime = et
} else {
s.endTime = config.Timestamp()
}
s.mu.Unlock()
sps := s.tracer.provider.getSpanProcessors()
if len(sps) == 0 {
return
}
snap := s.snapshot()
for _, sp := range sps {
sp.sp.OnEnd(snap)
}
}
// monotonicEndTime returns the end time at present but offset from start,
// monotonically.
//
// The monotonic clock is used in subtractions hence the duration since start
// added back to start gives end as a monotonic time. See
// https://golang.org/pkg/time/#hdr-Monotonic_Clocks
func monotonicEndTime(start time.Time) time.Time {
return start.Add(time.Since(start))
}
// RecordError will record err as a span event for this span. An additional call to
// SetStatus is required if the Status of the Span should be set to Error, this method
// does not change the Span status. If this span is not being recorded or err is nil
// then this method does nothing.
func (s *recordingSpan) RecordError(err error, opts ...trace.EventOption) {
if s == nil || err == nil || !s.IsRecording() {
return
}
opts = append(opts, trace.WithAttributes(
semconv.ExceptionType(typeStr(err)),
semconv.ExceptionMessage(err.Error()),
))
c := trace.NewEventConfig(opts...)
if c.StackTrace() {
opts = append(opts, trace.WithAttributes(
semconv.ExceptionStacktrace(recordStackTrace()),
))
}
s.addEvent(semconv.ExceptionEventName, opts...)
}
func typeStr(i interface{}) string {
t := reflect.TypeOf(i)
if t.PkgPath() == "" && t.Name() == "" {
// Likely a builtin type.
return t.String()
}
return fmt.Sprintf("%s.%s", t.PkgPath(), t.Name())
}
func recordStackTrace() string {
stackTrace := make([]byte, 2048)
n := runtime.Stack(stackTrace, false)
return string(stackTrace[0:n])
}
// AddEvent adds an event with the provided name and options. If this span is
// not being recorded, then this method does nothing.
func (s *recordingSpan) AddEvent(name string, o ...trace.EventOption) {
if !s.IsRecording() {
return
}
s.addEvent(name, o...)
}
func (s *recordingSpan) addEvent(name string, o ...trace.EventOption) {
c := trace.NewEventConfig(o...)
e := Event{Name: name, Attributes: c.Attributes(), Time: c.Timestamp()}
// Discard attributes over limit.
limit := s.tracer.provider.spanLimits.AttributePerEventCountLimit
if limit == 0 {
// Drop all attributes.
e.DroppedAttributeCount = len(e.Attributes)
e.Attributes = nil
} else if limit > 0 && len(e.Attributes) > limit {
// Drop over capacity.
e.DroppedAttributeCount = len(e.Attributes) - limit
e.Attributes = e.Attributes[:limit]
}
s.mu.Lock()
s.events.add(e)
s.mu.Unlock()
}
// SetName sets the name of this span. If this span is not being recorded,
// then this method does nothing.
func (s *recordingSpan) SetName(name string) {
if !s.IsRecording() {
return
}
s.mu.Lock()
defer s.mu.Unlock()
s.name = name
}
// Name returns the name of this span.
func (s *recordingSpan) Name() string {
s.mu.Lock()
defer s.mu.Unlock()
return s.name
}
// Parent returns the SpanContext of this span's parent span.
func (s *recordingSpan) Parent() trace.SpanContext {
s.mu.Lock()
defer s.mu.Unlock()
return s.parent
}
// SpanKind returns the SpanKind of this span.
func (s *recordingSpan) SpanKind() trace.SpanKind {
s.mu.Lock()
defer s.mu.Unlock()
return s.spanKind
}
// StartTime returns the time this span started.
func (s *recordingSpan) StartTime() time.Time {
s.mu.Lock()
defer s.mu.Unlock()
return s.startTime
}
// EndTime returns the time this span ended. For spans that have not yet
// ended, the returned value will be the zero value of time.Time.
func (s *recordingSpan) EndTime() time.Time {
s.mu.Lock()
defer s.mu.Unlock()
return s.endTime
}
// Attributes returns the attributes of this span.
//
// The order of the returned attributes is not guaranteed to be stable.
func (s *recordingSpan) Attributes() []attribute.KeyValue {
s.mu.Lock()
defer s.mu.Unlock()
s.dedupeAttrs()
return s.attributes
}
// dedupeAttrs deduplicates the attributes of s to fit capacity.
//
// This method assumes s.mu.Lock is held by the caller.
func (s *recordingSpan) dedupeAttrs() {
// Do not set a capacity when creating this map. Benchmark testing has
// shown this to only add unused memory allocations in general use.
exists := make(map[attribute.Key]int)
s.dedupeAttrsFromRecord(&exists)
}
// dedupeAttrsFromRecord deduplicates the attributes of s to fit capacity
// using record as the record of unique attribute keys to their index.
//
// This method assumes s.mu.Lock is held by the caller.
func (s *recordingSpan) dedupeAttrsFromRecord(record *map[attribute.Key]int) {
// Use the fact that slices share the same backing array.
unique := s.attributes[:0]
for _, a := range s.attributes {
if idx, ok := (*record)[a.Key]; ok {
unique[idx] = a
} else {
unique = append(unique, a)
(*record)[a.Key] = len(unique) - 1
}
}
// s.attributes have element types of attribute.KeyValue. These types are
// not pointers and they themselves do not contain pointer fields,
// therefore the duplicate values do not need to be zeroed for them to be
// garbage collected.
s.attributes = unique
}
// Links returns the links of this span.
func (s *recordingSpan) Links() []Link {
s.mu.Lock()
defer s.mu.Unlock()
if len(s.links.queue) == 0 {
return []Link{}
}
return s.links.copy()
}
// Events returns the events of this span.
func (s *recordingSpan) Events() []Event {
s.mu.Lock()
defer s.mu.Unlock()
if len(s.events.queue) == 0 {
return []Event{}
}
return s.events.copy()
}
// Status returns the status of this span.
func (s *recordingSpan) Status() Status {
s.mu.Lock()
defer s.mu.Unlock()
return s.status
}
// InstrumentationScope returns the instrumentation.Scope associated with
// the Tracer that created this span.
func (s *recordingSpan) InstrumentationScope() instrumentation.Scope {
s.mu.Lock()
defer s.mu.Unlock()
return s.tracer.instrumentationScope
}
// InstrumentationLibrary returns the instrumentation.Library associated with
// the Tracer that created this span.
func (s *recordingSpan) InstrumentationLibrary() instrumentation.Library {
s.mu.Lock()
defer s.mu.Unlock()
return s.tracer.instrumentationScope
}
// Resource returns the Resource associated with the Tracer that created this
// span.
func (s *recordingSpan) Resource() *resource.Resource {
s.mu.Lock()
defer s.mu.Unlock()
return s.tracer.provider.resource
}
func (s *recordingSpan) AddLink(link trace.Link) {
if !s.IsRecording() {
return
}
if !link.SpanContext.IsValid() && len(link.Attributes) == 0 &&
link.SpanContext.TraceState().Len() == 0 {
return
}
l := Link{SpanContext: link.SpanContext, Attributes: link.Attributes}
// Discard attributes over limit.
limit := s.tracer.provider.spanLimits.AttributePerLinkCountLimit
if limit == 0 {
// Drop all attributes.
l.DroppedAttributeCount = len(l.Attributes)
l.Attributes = nil
} else if limit > 0 && len(l.Attributes) > limit {
l.DroppedAttributeCount = len(l.Attributes) - limit
l.Attributes = l.Attributes[:limit]
}
s.mu.Lock()
s.links.add(l)
s.mu.Unlock()
}
// DroppedAttributes returns the number of attributes dropped by the span
// due to limits being reached.
func (s *recordingSpan) DroppedAttributes() int {
s.mu.Lock()
defer s.mu.Unlock()
return s.droppedAttributes
}
// DroppedLinks returns the number of links dropped by the span due to limits
// being reached.
func (s *recordingSpan) DroppedLinks() int {
s.mu.Lock()
defer s.mu.Unlock()
return s.links.droppedCount
}
// DroppedEvents returns the number of events dropped by the span due to
// limits being reached.
func (s *recordingSpan) DroppedEvents() int {
s.mu.Lock()
defer s.mu.Unlock()
return s.events.droppedCount
}
// ChildSpanCount returns the count of spans that consider the span a
// direct parent.
func (s *recordingSpan) ChildSpanCount() int {
s.mu.Lock()
defer s.mu.Unlock()
return s.childSpanCount
}
// TracerProvider returns a trace.TracerProvider that can be used to generate
// additional Spans on the same telemetry pipeline as the current Span.
func (s *recordingSpan) TracerProvider() trace.TracerProvider {
return s.tracer.provider
}
// snapshot creates a read-only copy of the current state of the span.
func (s *recordingSpan) snapshot() ReadOnlySpan {
var sd snapshot
s.mu.Lock()
defer s.mu.Unlock()
sd.endTime = s.endTime
sd.instrumentationScope = s.tracer.instrumentationScope
sd.name = s.name
sd.parent = s.parent
sd.resource = s.tracer.provider.resource
sd.spanContext = s.spanContext
sd.spanKind = s.spanKind
sd.startTime = s.startTime
sd.status = s.status
sd.childSpanCount = s.childSpanCount
if len(s.attributes) > 0 {
s.dedupeAttrs()
sd.attributes = s.attributes
}
sd.droppedAttributeCount = s.droppedAttributes
if len(s.events.queue) > 0 {
sd.events = s.events.copy()
sd.droppedEventCount = s.events.droppedCount
}
if len(s.links.queue) > 0 {
sd.links = s.links.copy()
sd.droppedLinkCount = s.links.droppedCount
}
return &sd
}
func (s *recordingSpan) addChild() {
if !s.IsRecording() {
return
}
s.mu.Lock()
s.childSpanCount++
s.mu.Unlock()
}
func (*recordingSpan) private() {}
// runtimeTrace starts a "runtime/trace".Task for the span and returns a
// context containing the task.
func (s *recordingSpan) runtimeTrace(ctx context.Context) context.Context {
if !rt.IsEnabled() {
// Avoid additional overhead if runtime/trace is not enabled.
return ctx
}
nctx, task := rt.NewTask(ctx, s.name)
s.mu.Lock()
s.executionTracerTaskEnd = task.End
s.mu.Unlock()
return nctx
}
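// With runtime/trace enabled, every SDK span surfaces as a Task in the
// execution trace. A minimal capture sketch (illustrative only; assumes
// os and log imports, and the file name is arbitrary):
//
//	f, err := os.Create("trace.out")
//	if err != nil {
//		log.Fatal(err)
//	}
//	defer f.Close()
//	if err := rt.Start(f); err != nil {
//		log.Fatal(err)
//	}
//	defer rt.Stop()
//	// ... start and end spans, then inspect with: go tool trace trace.out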
// nonRecordingSpan is a minimal implementation of the OpenTelemetry Span API
// that wraps a SpanContext. It performs no operations other than to return
// the wrapped SpanContext or TracerProvider that created it.
type nonRecordingSpan struct {
embedded.Span
// tracer is the SDK tracer that created this span.
tracer *tracer
sc trace.SpanContext
}
var _ trace.Span = nonRecordingSpan{}
// SpanContext returns the wrapped SpanContext.
func (s nonRecordingSpan) SpanContext() trace.SpanContext { return s.sc }
// IsRecording always returns false.
func (nonRecordingSpan) IsRecording() bool { return false }
// SetStatus does nothing.
func (nonRecordingSpan) SetStatus(codes.Code, string) {}
// SetError does nothing.
func (nonRecordingSpan) SetError(bool) {}
// SetAttributes does nothing.
func (nonRecordingSpan) SetAttributes(...attribute.KeyValue) {}
// End does nothing.
func (nonRecordingSpan) End(...trace.SpanEndOption) {}
// RecordError does nothing.
func (nonRecordingSpan) RecordError(error, ...trace.EventOption) {}
// AddEvent does nothing.
func (nonRecordingSpan) AddEvent(string, ...trace.EventOption) {}
// AddLink does nothing.
func (nonRecordingSpan) AddLink(trace.Link) {}
// SetName does nothing.
func (nonRecordingSpan) SetName(string) {}
// TracerProvider returns the trace.TracerProvider that provided the Tracer
// that created this span.
func (s nonRecordingSpan) TracerProvider() trace.TracerProvider { return s.tracer.provider }
func isRecording(s SamplingResult) bool {
return s.Decision == RecordOnly || s.Decision == RecordAndSample
}
func isSampled(s SamplingResult) bool {
return s.Decision == RecordAndSample
}
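// For reference, the three SamplingDecision values map onto these helpers
// as follows (summary added for clarity, not part of the original file):
//
//	Drop            -> isRecording: false, isSampled: false
//	RecordOnly      -> isRecording: true,  isSampled: false
//	RecordAndSample -> isRecording: true,  isSampled: true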
// Status is the classified state of a Span.
type Status struct {
// Code is an identifier of a Span's state classification.
Code codes.Code
// Description is a user hint about why that status was set. It is only
// applicable when Code is Error.
Description string
}
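// A usage sketch (illustrative only): recording an error and classifying
// the span state through the API, which the SDK stores as this Status.
// Note the Description is retained only when the code is codes.Error.
//
//	span.RecordError(err)
//	span.SetStatus(codes.Error, "checkout failed")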

36
vendor/go.opentelemetry.io/otel/sdk/trace/span_exporter.go generated vendored Normal file
View File

@ -0,0 +1,36 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import "context"
// SpanExporter handles the delivery of spans to external receivers. This is
// the final component in the trace export pipeline.
type SpanExporter interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// ExportSpans exports a batch of spans.
//
// This function is called synchronously, so there is no concurrency
// safety requirement. However, due to the synchronous calling pattern,
// it is critical that all timeouts and cancellations contained in the
// passed context be honored.
//
// Any retry logic must be contained in this function. The SDK that
// calls this function will not implement any retry logic. All errors
// returned by this function are considered unrecoverable and will be
// reported to a configured error Handler.
ExportSpans(ctx context.Context, spans []ReadOnlySpan) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Shutdown notifies the exporter of a pending halt to operations. The
// exporter is expected to perform any cleanup or synchronization it
// requires while honoring all timeouts and cancellations contained in
// the passed context.
Shutdown(ctx context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
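// A minimal SpanExporter sketch satisfying the contract above
// (illustrative only, not part of this file; assumes an fmt import). It
// honors context cancellation and performs no retries.
//
//	type nameExporter struct{}
//
//	func (nameExporter) ExportSpans(ctx context.Context, spans []ReadOnlySpan) error {
//		for _, s := range spans {
//			select {
//			case <-ctx.Done():
//				return ctx.Err() // honor timeouts and cancellation
//			default:
//			}
//			fmt.Println(s.Name())
//		}
//		return nil
//	}
//
//	func (nameExporter) Shutdown(context.Context) error { return nil }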

114
vendor/go.opentelemetry.io/otel/sdk/trace/span_limits.go generated vendored Normal file
View File

@ -0,0 +1,114 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import "go.opentelemetry.io/otel/sdk/internal/env"
const (
// DefaultAttributeValueLengthLimit is the default maximum allowed
// attribute value length, unlimited.
DefaultAttributeValueLengthLimit = -1
// DefaultAttributeCountLimit is the default maximum number of attributes
// a span can have.
DefaultAttributeCountLimit = 128
// DefaultEventCountLimit is the default maximum number of events a span
// can have.
DefaultEventCountLimit = 128
// DefaultLinkCountLimit is the default maximum number of links a span can
// have.
DefaultLinkCountLimit = 128
// DefaultAttributePerEventCountLimit is the default maximum number of
// attributes a span event can have.
DefaultAttributePerEventCountLimit = 128
// DefaultAttributePerLinkCountLimit is the default maximum number of
// attributes a span link can have.
DefaultAttributePerLinkCountLimit = 128
)
// SpanLimits represents the limits of a span.
type SpanLimits struct {
// AttributeValueLengthLimit is the maximum allowed attribute value length.
//
// This limit only applies to string and string slice attribute values.
// Any string longer than this value will be truncated to this length.
//
// Setting this to a negative value means no limit is applied.
AttributeValueLengthLimit int
// AttributeCountLimit is the maximum allowed span attribute count. Any
// attribute added to a span once this limit is reached will be dropped.
//
// Setting this to zero means no attributes will be recorded.
//
// Setting this to a negative value means no limit is applied.
AttributeCountLimit int
// EventCountLimit is the maximum allowed span event count. Any event
// added to a span after this limit is reached will be added, but the
// oldest event will be dropped.
//
// Setting this to zero means no events will be recorded.
//
// Setting this to a negative value means no limit is applied.
EventCountLimit int
// LinkCountLimit is the maximum allowed span link count. Any link added
// to a span after this limit is reached will be added, but the oldest
// link will be dropped.
//
// Setting this to zero means no links will be recorded.
//
// Setting this to a negative value means no limit is applied.
LinkCountLimit int
// AttributePerEventCountLimit is the maximum number of attributes allowed
// per span event. Any attribute added after this limit is reached will be
// dropped.
//
// Setting this to zero means no attributes will be recorded for events.
//
// Setting this to a negative value means no limit is applied.
AttributePerEventCountLimit int
// AttributePerLinkCountLimit is the maximum number of attributes allowed
// per span link. Any attribute added after this limit is reached will be
// dropped.
//
// Setting this to zero means no attributes will be recorded for links.
//
// Setting this to a negative value means no limit is applied.
AttributePerLinkCountLimit int
}
// NewSpanLimits returns a SpanLimits with all limits set to the value their
// corresponding environment variable holds, or the default if unset.
//
// • AttributeValueLengthLimit: OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT
// (default: unlimited)
//
// • AttributeCountLimit: OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT (default: 128)
//
// • EventCountLimit: OTEL_SPAN_EVENT_COUNT_LIMIT (default: 128)
//
// • AttributePerEventCountLimit: OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT (default:
// 128)
//
// • LinkCountLimit: OTEL_SPAN_LINK_COUNT_LIMIT (default: 128)
//
// • AttributePerLinkCountLimit: OTEL_LINK_ATTRIBUTE_COUNT_LIMIT (default: 128)
func NewSpanLimits() SpanLimits {
return SpanLimits{
AttributeValueLengthLimit: env.SpanAttributeValueLength(DefaultAttributeValueLengthLimit),
AttributeCountLimit: env.SpanAttributeCount(DefaultAttributeCountLimit),
EventCountLimit: env.SpanEventCount(DefaultEventCountLimit),
LinkCountLimit: env.SpanLinkCount(DefaultLinkCountLimit),
AttributePerEventCountLimit: env.SpanEventAttributeCount(DefaultAttributePerEventCountLimit),
AttributePerLinkCountLimit: env.SpanLinkAttributeCount(DefaultAttributePerLinkCountLimit),
}
}
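// A configuration sketch (illustrative only): overriding a single limit in
// code while keeping the environment-derived values for the rest.
//
//	limits := NewSpanLimits()
//	limits.AttributeCountLimit = 64
//	tp := NewTracerProvider(WithSpanLimits(limits))
//
// Setting OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT=64 in the environment yields the
// same AttributeCountLimit.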

61
vendor/go.opentelemetry.io/otel/sdk/trace/span_processor.go generated vendored Normal file
View File

@ -0,0 +1,61 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
"sync"
)
// SpanProcessor is a processing pipeline for spans in the trace signal.
// SpanProcessors are registered with a TracerProvider and are called at
// the start and end of a Span's lifecycle, in the order they are
// registered.
type SpanProcessor interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// OnStart is called when a span is started. It is called synchronously
// and should not block.
OnStart(parent context.Context, s ReadWriteSpan)
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// OnEnd is called when a span is finished. It is called synchronously
// and hence should not block.
OnEnd(s ReadOnlySpan)
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Shutdown is called when the SDK shuts down. Any cleanup or release of
// resources held by the processor should be done in this call.
//
// Calls to OnStart, OnEnd, or ForceFlush after this has been called
// should be ignored.
//
// All timeouts and cancellations contained in ctx must be honored; this
// should not block indefinitely.
Shutdown(ctx context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// ForceFlush exports all ended spans that have not yet been exported to
// the configured Exporter. It should only be called when absolutely
// necessary, such as when using a FaaS provider that may suspend the
// process after an invocation, but before the Processor can export the
// completed spans.
ForceFlush(ctx context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
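// A minimal SpanProcessor sketch (illustrative only, not part of this
// file; assumes a sync/atomic import): it counts ended spans and ignores
// calls after Shutdown.
//
//	type countingProcessor struct {
//		stopped atomic.Bool
//		ended   atomic.Int64
//	}
//
//	func (p *countingProcessor) OnStart(context.Context, ReadWriteSpan) {}
//
//	func (p *countingProcessor) OnEnd(ReadOnlySpan) {
//		if !p.stopped.Load() {
//			p.ended.Add(1)
//		}
//	}
//
//	func (p *countingProcessor) Shutdown(context.Context) error {
//		p.stopped.Store(true)
//		return nil
//	}
//
//	func (p *countingProcessor) ForceFlush(context.Context) error { return nil }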
type spanProcessorState struct {
sp SpanProcessor
state sync.Once
}
func newSpanProcessorState(sp SpanProcessor) *spanProcessorState {
return &spanProcessorState{sp: sp}
}
type spanProcessorStates []*spanProcessorState

153
vendor/go.opentelemetry.io/otel/sdk/trace/tracer.go generated vendored Normal file
View File

@ -0,0 +1,153 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
"time"
"go.opentelemetry.io/otel/sdk/instrumentation"
"go.opentelemetry.io/otel/trace"
"go.opentelemetry.io/otel/trace/embedded"
)
type tracer struct {
embedded.Tracer
provider *TracerProvider
instrumentationScope instrumentation.Scope
}
var _ trace.Tracer = &tracer{}
// Start starts a Span and returns it along with a context containing it.
//
// The Span is created with the provided name and as a child of any existing
// span context found in the passed context. The created Span will be
// configured appropriately by any SpanOption passed.
func (tr *tracer) Start(ctx context.Context, name string, options ...trace.SpanStartOption) (context.Context, trace.Span) {
config := trace.NewSpanStartConfig(options...)
if ctx == nil {
// Prevent trace.ContextWithSpan from panicking.
ctx = context.Background()
}
// For local spans created by this SDK, track child span count.
if p := trace.SpanFromContext(ctx); p != nil {
if sdkSpan, ok := p.(*recordingSpan); ok {
sdkSpan.addChild()
}
}
s := tr.newSpan(ctx, name, &config)
if rw, ok := s.(ReadWriteSpan); ok && s.IsRecording() {
sps := tr.provider.getSpanProcessors()
for _, sp := range sps {
sp.sp.OnStart(ctx, rw)
}
}
if rtt, ok := s.(runtimeTracer); ok {
ctx = rtt.runtimeTrace(ctx)
}
return trace.ContextWithSpan(ctx, s), s
}
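// A usage sketch (illustrative only; the tracer name is an assumption):
//
//	tp := NewTracerProvider()
//	tr := tp.Tracer("example.com/app")
//	ctx, span := tr.Start(context.Background(), "operation")
//	defer span.End()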
type runtimeTracer interface {
// runtimeTrace starts a "runtime/trace".Task for the span and
// returns a context containing the task.
runtimeTrace(ctx context.Context) context.Context
}
// newSpan returns a new configured span.
func (tr *tracer) newSpan(ctx context.Context, name string, config *trace.SpanConfig) trace.Span {
// If told explicitly to make this a new root, use a zero value
// SpanContext as the parent, which contains an invalid trace ID and is
// not remote.
var psc trace.SpanContext
if config.NewRoot() {
ctx = trace.ContextWithSpanContext(ctx, psc)
} else {
psc = trace.SpanContextFromContext(ctx)
}
// If there is a valid parent trace ID, use it to ensure the continuity of
// the trace. Always generate a new span ID so other components can rely
// on a unique span ID, even if the Span is non-recording.
var tid trace.TraceID
var sid trace.SpanID
if !psc.TraceID().IsValid() {
tid, sid = tr.provider.idGenerator.NewIDs(ctx)
} else {
tid = psc.TraceID()
sid = tr.provider.idGenerator.NewSpanID(ctx, tid)
}
samplingResult := tr.provider.sampler.ShouldSample(SamplingParameters{
ParentContext: ctx,
TraceID: tid,
Name: name,
Kind: config.SpanKind(),
Attributes: config.Attributes(),
Links: config.Links(),
})
scc := trace.SpanContextConfig{
TraceID: tid,
SpanID: sid,
TraceState: samplingResult.Tracestate,
}
if isSampled(samplingResult) {
scc.TraceFlags = psc.TraceFlags() | trace.FlagsSampled
} else {
scc.TraceFlags = psc.TraceFlags() &^ trace.FlagsSampled
}
sc := trace.NewSpanContext(scc)
if !isRecording(samplingResult) {
return tr.newNonRecordingSpan(sc)
}
return tr.newRecordingSpan(psc, sc, name, samplingResult, config)
}
// newRecordingSpan returns a new configured recordingSpan.
func (tr *tracer) newRecordingSpan(psc, sc trace.SpanContext, name string, sr SamplingResult, config *trace.SpanConfig) *recordingSpan {
startTime := config.Timestamp()
if startTime.IsZero() {
startTime = time.Now()
}
s := &recordingSpan{
// Do not pre-allocate the attributes slice here! Doing so will
// allocate memory that is likely never going to be used, or if used,
// will be over-sized. The default Go compiler has been tested to
// dynamically allocate needed space very well. Benchmarking has shown
// it to be more performant than what we can predetermine here,
// especially for the common use case of few to no added
// attributes.
parent: psc,
spanContext: sc,
spanKind: trace.ValidateSpanKind(config.SpanKind()),
name: name,
startTime: startTime,
events: newEvictedQueueEvent(tr.provider.spanLimits.EventCountLimit),
links: newEvictedQueueLink(tr.provider.spanLimits.LinkCountLimit),
tracer: tr,
}
for _, l := range config.Links() {
s.AddLink(l)
}
s.SetAttributes(sr.Attributes...)
s.SetAttributes(config.Attributes()...)
return s
}
// newNonRecordingSpan returns a new configured nonRecordingSpan.
func (tr *tracer) newNonRecordingSpan(sc trace.SpanContext) nonRecordingSpan {
return nonRecordingSpan{tracer: tr, sc: sc}
}

9
vendor/go.opentelemetry.io/otel/sdk/trace/version.go generated vendored Normal file
View File

@ -0,0 +1,9 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/sdk/trace"
// version is the current release version of the trace SDK in use.
func version() string {
return "1.16.0-rc.1"
}

9
vendor/go.opentelemetry.io/otel/sdk/version.go generated vendored Normal file
View File

@ -0,0 +1,9 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package sdk // import "go.opentelemetry.io/otel/sdk"
// Version is the current release version of the OpenTelemetry SDK in use.
func Version() string {
return "1.28.0"
}