open-telemetry / opentelemetry-lambda

Create your own Lambda Layer in each OTel language using this starter code. Add the Lambda Layer to your Lambda Function to get tracing with OpenTelemetry.

Home Page: https://opentelemetry.io

License: Apache License 2.0


opentelemetry-lambda's Introduction

OpenTelemetry Lambda

OpenTelemetry Lambda Layers

The OpenTelemetry Lambda Layers provide the OpenTelemetry (OTel) code to export telemetry asynchronously from AWS Lambdas. They do this by embedding a stripped-down version of OpenTelemetry Collector Contrib inside an AWS Lambda Extension Layer.

Some layers include the corresponding OTel language SDK for the Lambda. This allows Lambdas to use OpenTelemetry to send traces and metrics to any configured backend.

Extension Layer Language Support

FAQ

  • What exporters/receivers/processors are included from the OpenTelemetry Collector?

    You can check out the stripped-down collector's imports in this repository for a full list of currently included components.

  • Is the Lambda layer provided or do I need to build it and distribute it myself?

This repository does not provide pre-built Lambda layers. They must be built manually and saved in your AWS account. This repo has files to facilitate doing that. More information is provided in the Collector folder's README.

Design Proposal

To get a better understanding of the proposed design for the OpenTelemetry Lambda extension, you can see the Design Proposal here.

Features

The following is a list of features provided by the OpenTelemetry layers.

OpenTelemetry collector

The layer includes the OpenTelemetry Collector as a Lambda extension.

Custom context propagation carrier extraction

Context can be propagated through various mechanisms (e.g. HTTP headers (APIGW), message attributes (SQS), ...). In some cases it may be necessary to pass a custom context propagation extractor to the Lambda; this feature allows that through the Lambda instrumentation configuration.
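
For illustration, here is a minimal Python sketch of a custom extractor, assuming the Python layer's AwsLambdaInstrumentor accepts an event_context_extractor argument (as the upstream opentelemetry-instrumentation-aws-lambda package does); the SQS carrier shape below is an assumption:

from opentelemetry import propagate
from opentelemetry.instrumentation.aws_lambda import AwsLambdaInstrumentor

def extract_context_from_sqs(lambda_event):
    # Pull propagation headers out of the first SQS record's message attributes
    # (hypothetical carrier shape; adjust to your event source).
    attributes = lambda_event["Records"][0].get("messageAttributes", {})
    carrier = {key: value.get("stringValue", "") for key, value in attributes.items()}
    return propagate.extract(carrier)

AwsLambdaInstrumentor().instrument(event_context_extractor=extract_context_from_sqs)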

X-Ray Env Var Span Link

This attaches the context extracted from the Lambda runtime environment to the instrumentation-generated span as a span link, rather than disabling that context extraction entirely.
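
For intuition only, a minimal Python sketch of the idea, assuming the opentelemetry-propagator-aws-xray package is installed; this is not the layer's actual implementation:

import os

from opentelemetry import trace
from opentelemetry.propagators.aws import AwsXRayPropagator
from opentelemetry.trace import Link

tracer = trace.get_tracer(__name__)

# Extract the X-Ray context from the runtime environment variable...
carrier = {"X-Amzn-Trace-Id": os.environ.get("_X_AMZN_TRACE_ID", "")}
xray_context = AwsXRayPropagator().extract(carrier)
linked_context = trace.get_current_span(xray_context).get_span_context()

# ...and attach it as a link instead of using it as the parent.
with tracer.start_as_current_span("invocation", links=[Link(linked_context)]):
    pass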

Semantic conventions

The Lambda language implementation follows the semantic conventions specified in the OpenTelemetry Specification.

Auto instrumentation

The Lambda layer includes support for automatically instrumenting code via the use of instrumentation libraries.

Flush TracerProvider

The Lambda instrumentation will flush the TracerProvider at the end of an invocation.

Flush MeterProvider

The Lambda instrumentation will flush the MeterProvider at the end of an invocation.
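
Roughly, the end-of-invocation flush amounts to the following Python sketch (the real wrappers differ per language; force_flush availability on the configured providers is assumed):

from opentelemetry import metrics, trace

def flush_telemetry():
    # Force-flush buffered spans and metrics before the execution environment is frozen.
    tracer_provider = trace.get_tracer_provider()
    meter_provider = metrics.get_meter_provider()
    if hasattr(tracer_provider, "force_flush"):
        tracer_provider.force_flush()
    if hasattr(meter_provider, "force_flush"):
        meter_provider.force_flush()

def handler(event, context):
    try:
        return {"statusCode": 200}
    finally:
        flush_telemetry()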

Support matrix

The table below captures the state of various features and their level of support across different runtimes.

Feature Node Python Java .NET Go Ruby
OpenTelemetry collector + + + + +
Custom context propagation + - - - N/A
X-Ray Env Var Span Link - - - - N/A
Semantic Conventions^ + + + N/A
- Trace General^1 + + + N/A
- Trace Incoming^2 - - + N/A
- Trace Outgoing^3 + - + N/A
- Metrics^4 - - - N/A
Auto instrumentation + + - N/A
Flush TracerProvider + + + +
Flush MeterProvider + +

Legend

  • + is supported
  • - not supported
  • ^ subject to change depending on spec updates
  • N/A not applicable to the particular language
  • blank cell means the status of the feature is not known.

The following are runtimes which are no longer or not yet supported by this repository:

Contributing

See the Contributing Guide for details.

Here is a list of community roles with current and previous members:

Learn more about roles in the community repository.

opentelemetry-lambda's People

Contributors

adcharre, aneurysm9, anuraaga, bhautikpip, bryan-aguilar, chrisrichardsevergreen, dependabot[bot], humivo, johnbley, kausik-a, kuba-wu, kxyr, lupengamzn, martinkuba, mat-rumian, nathanielrn, nslaughter, ocelotl, povilasv, r0mdau, rakyll, rapphil, rashmiram, samimusallam, serkan-ozal, tylerbenson, vasireddy99, wangzlei, willarmiros, xuan-cao-swi

opentelemetry-lambda's Issues

Unknown processor type "name" for name

Overview

I receive an error cannot load configuration: unknown processors type "insert" for insert when I try to use an OpenTelemetry config that includes a processors section to add attributes (regardless of whether it is referenced in a pipeline).

Sample Configuration

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "$HONEYCOMB_WRITE_KEY"
      "x-honeycomb-dataset": "$HONEYCOMB_DATASET"
  logging:
  awsxray:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsxray, otlp]
    metrics:
      receivers: [otlp]
      exporters: [logging]

Relevant Lambda logs

START RequestId: 0ff5f447-5f83-415d-8066-be50193a01e6 Version: $LATEST
2021/07/26 20:03:09 [collector] Launching OpenTelemetry Lambda extension, version:  v0.1.0
2021-07-26T20:03:09.359Z	info	service/collector.go:262	Starting otelcol...	{
    "Version": "v0.1.0",
    "NumCPU": 2
}
2021-07-26T20:03:09.370Z	info	service/collector.go:170	Setting up own telemetry...
2021-07-26T20:03:09.370Z	info	service/collector.go:205	Loading configuration...
Error: cannot load configuration: unknown processors type "insert" for insert
EXTENSION	Name: collector	State: Started	Events: []
END RequestId: 0ff5f447-5f83-415d-8066-be50193a01e6
REPORT RequestId: 0ff5f447-5f83-415d-8066-be50193a01e6	Duration: 10010.84 ms	Billed Duration: 10000 ms	Memory Size: 512 MB	Max Memory Used: 49 MB	
XRAY TraceId: 1-60ff14f2-5e0274d9775e9c3a239e57a5	SegmentId: 56412fd70f3e6881	Sampled: true	

Based on the behavior of the Lambda components that are initialized as factory methods, especially attributesprocessor, I would expect this to correctly add attributes to spans.

I have confirmed that when this section is used with a local OTLP exporter, it behaves as expected.

Relevant versions

OpenTelemetry Lambda layer version: arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-python38-ver-1-3-0:1
OpenTelemetry SDK version: 1.3.0
CollectorVersion: v0.1.0

Expected behavior:

  1. I can reference a processor in my trace pipeline

Context propagation differences

I've observed many differences in context propagation between instrumentations and invocation types.

Reqs:

  • AWS XRAY Disabled on lambdas
  • OTEL_TRACES_SAMPLER = always_on
| Instrumentation | Instrumented call, client X-Ray propagation set | Instrumented call, client no X-Ray | Not instrumented call |
|---|---|---|---|
| Python (invoke) | Single trace, context propagated | Two traces created, context not propagated, lambda trace root span pointing to the unknown parent | Single trace, root span pointing to unknown parent |
| NodeJS (invoke) | Single trace, context propagated | Two traces created, context not propagated, lambda trace root span pointing to the unknown parent | Single trace, root span pointing to unknown parent |
| Java Wrapper (invoke) | Single trace, context propagated | Two traces created (client, lambda); lambda trace root span is correct, there is no unknown parent span | Single trace created, root span doesn't have a parent |
| Python (API CURL) | Single trace, context propagated, but lambda span points to unknown parent | Two traces created, context not propagated, lambda trace root span pointing to the unknown parent | Single trace, root span pointing to unknown parent |
| NodeJS (API CURL) | Single trace, context propagated, but lambda span points to unknown parent | Single trace, context propagated | Single trace, root span pointing to unknown parent |
| Java Wrapper (API CURL) | Single trace, context propagated, but lambda span points to unknown parent | Single trace, context propagated | Single trace created, root span doesn't have a parent |

I think the way it should work for OTel in this case is:

  • a non-instrumented call to the lambda should generate a single trace without a parent on the root span (e.g. if OTEL_TRACES_SAMPLER=always_on)
  • an instrumented call to the lambda should generate a single trace with context propagated

Support build Lambda Collector extension layer on demand

At the moment the Lambda Collector extension layer contains very limited components, due to the reason described here.

We are cautious about adding more collector components to the code because we don't want to inflate the binary size; also, for neutrality, we suggest each vendor maintain its own downstream repo. For example, aws-otel-lambda contains the AWS X-Ray exporter and the EMF exporter.

Ideally, we want to provide a mechanism that lets users easily build a custom collector extension layer on demand. One option is to use opentelemetry-collector-builder, so a user can dynamically build and publish a custom Lambda layer to their own AWS account:

  1. download this repo
  2. go to the collector folder
  3. create a Collector config custom.yml under the collector folder
  4. run make publish-layer

Use opentelemetry-instrument script to invoke python app

Currently, we have a wrapper script for initializing the python function

https://github.com/open-telemetry/opentelemetry-lambda/blob/main/python/src/otel/otel_sdk/otel_wrapper.py

I'm wondering if there is any known issue with instead invoking the standard opentelemetry-instrument script?

https://github.com/open-telemetry/opentelemetry-python/tree/main/opentelemetry-instrumentation#opentelemetry-instrument

Perhaps it's related to the two TODOs?

https://github.com/open-telemetry/opentelemetry-lambda/blob/main/python/src/otel/otel_sdk/otel_wrapper.py#L15

If we could leverage opentelemetry-instrument, it would ensure there is only one consistent and officially maintained method for enabling Python instrumentation.

Note I am thinking of this in the context of adding auto instrumentation to k8s - I think the concept of updating the runtime for auto instrumentation in a k8s pod and in a lambda function is almost, if not always, equivalent.

open-telemetry/opentelemetry-operator#455

@wangzlei @NathanielRN Any ideas?

Adjust collector extension log level

All logs in AWS Lambda go to one CloudWatch log group, so the collector extension logs get mixed in with the user's Lambda application logs.

The collector extension should support adjusting the collector log level, so users can enable/disable collector logging via configuration.

Unable to add layer

I've been trying to test this, but it keeps failing with the following error when I try to attach the layer:

not authorized to perform: lambda:GetLayerVersion on resource: arn:aws:lambda:eu-west-3:297975325230:layer:opentelemetry-lambda-extension:8

I am using an administrator account with "full access" but it still fails.

Also tried it via the UI and get the same message "You are not authorized to perform: lambda:GetLayerVersion."

Looks like we are not able to query the version of the layer

Unknown exporters type - what types are supported?

Hi,

I followed these steps https://github.com/open-telemetry/opentelemetry-lambda/blob/main/collector/README.md to publish the OpenTelemetry Collector Lambda layer from scratch.

However, I noticed https://github.com/open-telemetry/opentelemetry-lambda/blob/main/collector/config.yaml is missing X-Ray, so I defined a custom config (as described here: https://aws-otel.github.io/docs/getting-started/x-ray#configuring-the-aws-x-ray-exporter), which in the end looks similar to https://github.com/aws-observability/aws-otel-lambda/blob/main/adot/collector/config.yaml.

However it doesn't work:

info	service/application.go:277	Starting otelcol...	{"Version": "v0.1.0", "NumCPU": 2}
info	service/application.go:185	Setting up own telemetry...
info	service/application.go:220	Loading configuration...
Error: cannot load configuration: unknown exporters type "awsxray" for awsxray

If I switch from awsxray to jaeger, I get Error: cannot load configuration: unknown exporters type "jaeger" for jaeger. So I wonder what am I missing? What exporters are actually supported by the Lambda layer collector?

[Python] Support for `OTEL_PROPAGATORS` setting beyond AWS X-Ray

Moving the issue from the original issue in aws-otel-lambda.

I am using v1.3.0 of the Python layer and configured the lambda to use tracecontext propagation, but I can see the lambda reporting an extra span that seems to be related to X-Ray, which shouldn't be there.

Notable additional config:

    OTEL_PROPAGATORS: "tracecontext"
    OTEL_TRACES_SAMPLER: "Always_On"

Lambda tracing is turned off (PassThrough).

Code:

def consumer(event, lambda_context):

    context = extract(event['headers'])

    # Create top level transaction representing the lambda work
    with tracer.start_as_current_span("consumer-function-top-level", context=context, kind=SpanKind.SERVER):

        # Some other internal work performed by the lambda
        with tracer.start_as_current_span("consumer-function-some-internal-work", kind=SpanKind.INTERNAL):

            time.sleep(1)

        time.sleep(0.5)

    return {'statusCode': 200}

This should produce 2 spans: the top-level consumer-function-top-level and the nested consumer-function-some-internal-work. Instead, I am seeing 3 spans created in the CloudWatch log, with Span #2 (065c07aa6cc4916d) that shouldn't be there:



2021-07-24T04:36:51.316Z	INFO	loggingexporter/logging_exporter.go:41	TracesExporter	{     "#spans": 3 }
--
2021-07-24T04:36:51.316Z	DEBUG	loggingexporter/logging_exporter.go:51	ResourceSpans #0
Resource labels:
-> telemetry.sdk.language: STRING(python)
-> telemetry.sdk.name: STRING(opentelemetry)
-> telemetry.sdk.version: STRING(1.3.0)
-> cloud.region: STRING(ap-southeast-2)
-> cloud.provider: STRING(aws)
-> faas.name: STRING(aws-python-api-worker-project-dev-consumer)
-> faas.version: STRING($LATEST)
-> service.name: STRING(aws-python-api-worker-project-dev-consumer)
InstrumentationLibrarySpans #0
InstrumentationLibrary handler

Span #0
Trace ID       : ac4d7110c6d22e985b5db28e8ecca769
Parent ID      : 3c374013084a679c
ID             : 522890b38766e922
Name           : consumer-function-some-internal-work
Kind           : SPAN_KIND_INTERNAL
Start time     : 2021-07-24 04:36:49.813522476 +0000 UTC
End time       : 2021-07-24 04:36:50.814894967 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message :

Span #1
Trace ID       : ac4d7110c6d22e985b5db28e8ecca769
Parent ID      : e24504245b7cff0e
ID             : 3c374013084a679c
Name           : consumer-function-top-level
Kind           : SPAN_KIND_SERVER
Start time     : 2021-07-24 04:36:48.812135568 +0000 UTC
End time       : 2021-07-24 04:36:51.315562096 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message :

Span #2 <==== EXTRA SPAN!!!
Trace ID       : ac4d7110c6d22e985b5db28e8ecca769
Parent ID      : ad1bb604de1da312
ID             : 065c07aa6cc4916d
Name           : handler.consumer
Kind           : SPAN_KIND_SERVER
Start time     : 2021-07-24 04:36:48.811673087 +0000 UTC
End time       : 2021-07-24 04:36:51.315622948 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message :
Attributes:
-> faas.execution: STRING(e759abdb-6119-4f91-afe0-716a71276825)
-> faas.id: STRING(arn:aws:lambda:ap-southeast-2:401722391821:function:aws-python-api-worker-project-dev-consumer)
-> faas.name: STRING(aws-python-api-worker-project-dev-consumer)
-> faas.version: STRING($LATEST)


By the looks of it, the Python implementation doesn't support propagator autodiscovery or trace context propagation, and assumes propagation is done using only the AWS X-Ray trace context.
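
As a point of comparison, forcing the global propagator to W3C trace context in code looks like the sketch below; this is only a way to probe whether the extra span comes from the X-Ray-specific extraction, not a confirmed fix:

from opentelemetry import propagate
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator

# Replace whatever propagator the layer configured with plain W3C tracecontext.
propagate.set_global_textmap(TraceContextTextMapPropagator())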

Node.js lambda layer - instrumentation error

Steps to reproduce:

  • build and deploy layer
  • create sample JS lambda
  • add layer
  • configure AWS_LAMBDA_EXEC_WRAPPER
    Result: Uncaught Exception {"errorType":"Runtime.ImportModuleError(...)

The problem was fixed by open-telemetry/opentelemetry-js#2450 which was released with "@opentelemetry/instrumentation": "0.26.0". I manually changed the dependency and it works.

Proposed solution:

@anuraaga who could help with that? Especially releasing opentelemetry-instrumentation-aws-lambda - I'm green in this respect.

Ability to set log-level of the collector

OTel collector only accepts log-level as a command line flag. But since extensions are automatically started by lambda, there doesn't seem to be any way to change this. Being able to set log-level is important when debugging issues such as connectivity to backends, creds, etc. Perhaps we need to change the extension from the Go binary itself to a shell script which can have arguments passed via environment variable.

Add Badges for Code Coverage and CI Status

Is your feature request related to a problem?
Most of the existing OpenTelemetry repositories display a badge for CI status and code coverage. In an effort to keep the lambda repository up to date and consistent with the other established SDKs and repositories, these badges should be included at the top of the main README.md file.

Describe the solution you'd like.
As a developer contributing to OpenTelemetry, I recommend adding a code coverage percentage and CI status badge at the top of the README document of the Lambda repo. These badges are a common feature of many modern open source projects, which improves readability and convenience to developers. By adding a badge to the README.md, with a quick scan, any observer will be able to know the status of the repository.

cc: @alolita

Change to collector causing issues with nodejs

I did some digging to figure out what exactly was causing the issue I was seeing and it appears to be related to PR #194.

I am pretty new to Go, so I really don't know why this causes an issue 😅 Also, I only tested this with Node.js, but it is possible it is causing trouble in other environments, I just haven't tested those 🤷‍♂️

Here are the steps I took to build everything and a sample of the Cloudwatch logs I am seeing

  • run make publish-layer in the collector folder to have a local build of the collector
  • run npm i in the nodejs folder
  • Run terraform init in the nodejs/integration-tests/aws-sdk/wrapper
  • Run terraform apply to deploy
  • Run a GET request on the endpoint that was exposed on API gateway.

Then in CloudWatch I am just seeing the following followed by a timeout from lambda and API gateway

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   timestamp   |                                                                                                        message                                                                                                        |
|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1640891812819 | 2021/12/30 19:16:52 [collector] Launching OpenTelemetry Lambda extension, version:  v0.1.0                                                                                                                            |
| 1640891812834 | 2021-12-30T19:16:52.834Z info service/collector.go:186 Applying configuration...                                                                                                                                      |
| 1640891812835 | 2021-12-30T19:16:52.834Z info builder/exporters_builder.go:254 Exporter was built. {"kind": "exporter", "name": "logging"}                                                                                            |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/pipelines_builder.go:222 Pipeline was built. {"name": "pipeline", "name": "metrics"}                                                                                            |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/pipelines_builder.go:222 Pipeline was built. {"name": "pipeline", "name": "traces"}                                                                                             |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/receivers_builder.go:224 Receiver was built. {"kind": "receiver", "name": "otlp", "datatype": "metrics"}                                                                        |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/receivers_builder.go:224 Receiver was built. {"kind": "receiver", "name": "otlp", "datatype": "traces"}                                                                         |
| 1640891812835 | 2021-12-30T19:16:52.835Z info service/service.go:86 Starting extensions...                                                                                                                                            |
| 1640891812835 | 2021-12-30T19:16:52.835Z info service/service.go:91 Starting exporters...                                                                                                                                             |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/exporters_builder.go:40 Exporter is starting... {"kind": "exporter", "name": "logging"}                                                                                         |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/exporters_builder.go:48 Exporter started. {"kind": "exporter", "name": "logging"}                                                                                               |
| 1640891812835 | 2021-12-30T19:16:52.835Z info service/service.go:96 Starting processors...                                                                                                                                            |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/pipelines_builder.go:54 Pipeline is starting... {"name": "pipeline", "name": "metrics"}                                                                                         |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/pipelines_builder.go:65 Pipeline is started. {"name": "pipeline", "name": "metrics"}                                                                                            |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/pipelines_builder.go:54 Pipeline is starting... {"name": "pipeline", "name": "traces"}                                                                                          |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/pipelines_builder.go:65 Pipeline is started. {"name": "pipeline", "name": "traces"}                                                                                             |
| 1640891812835 | 2021-12-30T19:16:52.835Z info service/service.go:101 Starting receivers...                                                                                                                                            |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/receivers_builder.go:68 Receiver is starting... {"kind": "receiver", "name": "otlp"}                                                                                            |
| 1640891812835 | 2021-12-30T19:16:52.835Z info otlpreceiver/otlp.go:68 Starting GRPC server on endpoint 0.0.0.0:4317 {"kind": "receiver", "name": "otlp"}                                                                              |
| 1640891812835 | 2021-12-30T19:16:52.835Z info otlpreceiver/otlp.go:86 Starting HTTP server on endpoint 0.0.0.0:4318 {"kind": "receiver", "name": "otlp"}                                                                              |
| 1640891812835 | 2021-12-30T19:16:52.835Z info otlpreceiver/otlp.go:141 Setting up a second HTTP listener on legacy endpoint 0.0.0.0:55681 {"kind": "receiver", "name": "otlp"}                                                        |
| 1640891812835 | 2021-12-30T19:16:52.835Z info otlpreceiver/otlp.go:86 Starting HTTP server on endpoint 0.0.0.0:55681 {"kind": "receiver", "name": "otlp"}                                                                             |
| 1640891812835 | 2021-12-30T19:16:52.835Z info builder/receivers_builder.go:73 Receiver started. {"kind": "receiver", "name": "otlp"}                                                                                                  |
| 1640891812835 | 2021-12-30T19:16:52.835Z info service/telemetry.go:92 Setting up own telemetry...                                                                                                                                     |
| 1640891812836 | 2021-12-30T19:16:52.836Z info service/telemetry.go:116 Serving Prometheus metrics {"address": ":8888", "level": "basic", "service.instance.id": "27f930a5-d42a-4826-af98-8b11dc15fec2", "service.version": "latest"}  |
| 1640891812836 | 2021-12-30T19:16:52.836Z info service/collector.go:235 Starting otelcol... {"Version": "v0.1.0", "NumCPU": 2}                                                                                                         |
| 1640891812836 | 2021-12-30T19:16:52.836Z info service/collector.go:131 Everything is ready. Begin running and processing data.                                                                                                        |
| 1640891822826 | START RequestId: 80ac2185-73a7-4743-8b6d-5b4edb2d1ee4 Version: $LATEST                                                                                                                                                |
| 1640891823432 | 2021/12/30 19:17:03 [collector] Launching OpenTelemetry Lambda extension, version:  v0.1.0                                                                                                                            |
| 1640891823469 | 2021-12-30T19:17:03.469Z info service/collector.go:186 Applying configuration...                                                                                                                                      |
| 1640891823469 | 2021-12-30T19:17:03.469Z info builder/exporters_builder.go:254 Exporter was built. {"kind": "exporter", "name": "logging"}                                                                                            |
| 1640891823469 | 2021-12-30T19:17:03.469Z info builder/pipelines_builder.go:222 Pipeline was built. {"name": "pipeline", "name": "traces"}                                                                                             |
| 1640891823469 | 2021-12-30T19:17:03.469Z info builder/pipelines_builder.go:222 Pipeline was built. {"name": "pipeline", "name": "metrics"}                                                                                            |
| 1640891823469 | 2021-12-30T19:17:03.469Z info builder/receivers_builder.go:224 Receiver was built. {"kind": "receiver", "name": "otlp", "datatype": "traces"}                                                                         |
| 1640891823470 | 2021-12-30T19:17:03.469Z info builder/receivers_builder.go:224 Receiver was built. {"kind": "receiver", "name": "otlp", "datatype": "metrics"}                                                                        |
| 1640891823470 | 2021-12-30T19:17:03.469Z info service/service.go:86 Starting extensions...                                                                                                                                            |
| 1640891823470 | 2021-12-30T19:17:03.469Z info service/service.go:91 Starting exporters...                                                                                                                                             |
| 1640891823470 | 2021-12-30T19:17:03.469Z info builder/exporters_builder.go:40 Exporter is starting... {"kind": "exporter", "name": "logging"}                                                                                         |
| 1640891823470 | 2021-12-30T19:17:03.470Z info builder/exporters_builder.go:48 Exporter started. {"kind": "exporter", "name": "logging"}                                                                                               |
| 1640891823470 | 2021-12-30T19:17:03.470Z info service/service.go:96 Starting processors...                                                                                                                                            |
| 1640891823470 | 2021-12-30T19:17:03.470Z info builder/pipelines_builder.go:54 Pipeline is starting... {"name": "pipeline", "name": "traces"}                                                                                          |
| 1640891823470 | 2021-12-30T19:17:03.470Z info builder/pipelines_builder.go:65 Pipeline is started. {"name": "pipeline", "name": "traces"}                                                                                             |
| 1640891823470 | 2021-12-30T19:17:03.470Z info builder/pipelines_builder.go:54 Pipeline is starting... {"name": "pipeline", "name": "metrics"}                                                                                         |
| 1640891823470 | 2021-12-30T19:17:03.470Z info builder/pipelines_builder.go:65 Pipeline is started. {"name": "pipeline", "name": "metrics"}                                                                                            |
| 1640891823470 | 2021-12-30T19:17:03.470Z info service/service.go:101 Starting receivers...                                                                                                                                            |
| 1640891823470 | 2021-12-30T19:17:03.470Z info builder/receivers_builder.go:68 Receiver is starting... {"kind": "receiver", "name": "otlp"}                                                                                            |
| 1640891823470 | 2021-12-30T19:17:03.470Z info otlpreceiver/otlp.go:68 Starting GRPC server on endpoint 0.0.0.0:4317 {"kind": "receiver", "name": "otlp"}                                                                              |
| 1640891823470 | 2021-12-30T19:17:03.470Z info otlpreceiver/otlp.go:86 Starting HTTP server on endpoint 0.0.0.0:4318 {"kind": "receiver", "name": "otlp"}                                                                              |
| 1640891823470 | 2021-12-30T19:17:03.470Z info otlpreceiver/otlp.go:141 Setting up a second HTTP listener on legacy endpoint 0.0.0.0:55681 {"kind": "receiver", "name": "otlp"}                                                        |
| 1640891823470 | 2021-12-30T19:17:03.470Z info otlpreceiver/otlp.go:86 Starting HTTP server on endpoint 0.0.0.0:55681 {"kind": "receiver", "name": "otlp"}                                                                             |
| 1640891823470 | 2021-12-30T19:17:03.470Z info builder/receivers_builder.go:73 Receiver started. {"kind": "receiver", "name": "otlp"}                                                                                                  |
| 1640891823470 | 2021-12-30T19:17:03.470Z info service/telemetry.go:92 Setting up own telemetry...                                                                                                                                     |
| 1640891823471 | 2021-12-30T19:17:03.471Z info service/telemetry.go:116 Serving Prometheus metrics {"address": ":8888", "level": "basic", "service.instance.id": "0c59d806-ac4d-4b29-88e7-55cfea4607df", "service.version": "latest"}  |
| 1640891823471 | 2021-12-30T19:17:03.471Z info service/collector.go:235 Starting otelcol... {"Version": "v0.1.0", "NumCPU": 2}                                                                                                         |
| 1640891823471 | 2021-12-30T19:17:03.471Z info service/collector.go:131 Everything is ready. Begin running and processing data.                                                                                                        |
| 1640891842847 | EXTENSION Name: collector State: Started Events: []                                                                                                                                                                   |
| 1640891842847 | END RequestId: 80ac2185-73a7-4743-8b6d-5b4edb2d1ee4                                                                                                                                                                   |
| 1640891842847 | REPORT RequestId: 80ac2185-73a7-4743-8b6d-5b4edb2d1ee4 Duration: 20020.53 ms Billed Duration: 20000 ms Memory Size: 384 MB Max Memory Used: 31 MB                                                                     |
| 1640891842847 | 2021-12-30T19:17:22.847Z 80ac2185-73a7-4743-8b6d-5b4edb2d1ee4 Task timed out after 20.02 seconds                                                                                                                      |
| 1640891843166 | 2021/12/30 19:17:23 [collector] Launching OpenTelemetry Lambda extension, version:  v0.1.0                                                                                                                            |
| 1640891843196 | 2021-12-30T19:17:23.196Z info service/collector.go:186 Applying configuration...                                                                                                                                      |
| 1640891843196 | 2021-12-30T19:17:23.196Z info builder/exporters_builder.go:254 Exporter was built. {"kind": "exporter", "name": "logging"}                                                                                            |
| 1640891843196 | 2021-12-30T19:17:23.196Z info builder/pipelines_builder.go:222 Pipeline was built. {"name": "pipeline", "name": "traces"}                                                                                             |
| 1640891843196 | 2021-12-30T19:17:23.196Z info builder/pipelines_builder.go:222 Pipeline was built. {"name": "pipeline", "name": "metrics"}                                                                                            |
| 1640891843196 | 2021-12-30T19:17:23.196Z info builder/receivers_builder.go:224 Receiver was built. {"kind": "receiver", "name": "otlp", "datatype": "traces"}                                                                         |
| 1640891843197 | 2021-12-30T19:17:23.196Z info builder/receivers_builder.go:224 Receiver was built. {"kind": "receiver", "name": "otlp", "datatype": "metrics"}                                                                        |
| 1640891843197 | 2021-12-30T19:17:23.197Z info service/service.go:86 Starting extensions...                                                                                                                                            |
| 1640891843197 | 2021-12-30T19:17:23.197Z info service/service.go:91 Starting exporters...                                                                                                                                             |
| 1640891843197 | 2021-12-30T19:17:23.197Z info builder/exporters_builder.go:40 Exporter is starting... {"kind": "exporter", "name": "logging"}                                                                                         |
| 1640891843197 | 2021-12-30T19:17:23.197Z info builder/exporters_builder.go:48 Exporter started. {"kind": "exporter", "name": "logging"}                                                                                               |
| 1640891843197 | 2021-12-30T19:17:23.197Z info service/service.go:96 Starting processors...                                                                                                                                            |
| 1640891843197 | 2021-12-30T19:17:23.197Z info builder/pipelines_builder.go:54 Pipeline is starting... {"name": "pipeline", "name": "traces"}                                                                                          |
| 1640891843197 | 2021-12-30T19:17:23.197Z info builder/pipelines_builder.go:65 Pipeline is started. {"name": "pipeline", "name": "traces"}                                                                                             |
| 1640891843197 | 2021-12-30T19:17:23.197Z info builder/pipelines_builder.go:54 Pipeline is starting... {"name": "pipeline", "name": "metrics"}                                                                                         |
| 1640891843197 | 2021-12-30T19:17:23.197Z info builder/pipelines_builder.go:65 Pipeline is started. {"name": "pipeline", "name": "metrics"}                                                                                            |
| 1640891843197 | 2021-12-30T19:17:23.197Z info service/service.go:101 Starting receivers...                                                                                                                                            |
| 1640891843197 | 2021-12-30T19:17:23.197Z info builder/receivers_builder.go:68 Receiver is starting... {"kind": "receiver", "name": "otlp"}                                                                                            |
| 1640891843197 | 2021-12-30T19:17:23.197Z info otlpreceiver/otlp.go:68 Starting GRPC server on endpoint 0.0.0.0:4317 {"kind": "receiver", "name": "otlp"}                                                                              |
| 1640891843197 | 2021-12-30T19:17:23.197Z info otlpreceiver/otlp.go:86 Starting HTTP server on endpoint 0.0.0.0:4318 {"kind": "receiver", "name": "otlp"}                                                                              |
| 1640891843197 | 2021-12-30T19:17:23.197Z info otlpreceiver/otlp.go:141 Setting up a second HTTP listener on legacy endpoint 0.0.0.0:55681 {"kind": "receiver", "name": "otlp"}                                                        |
| 1640891843197 | 2021-12-30T19:17:23.197Z info otlpreceiver/otlp.go:86 Starting HTTP server on endpoint 0.0.0.0:55681 {"kind": "receiver", "name": "otlp"}                                                                             |
| 1640891843197 | 2021-12-30T19:17:23.197Z info builder/receivers_builder.go:73 Receiver started. {"kind": "receiver", "name": "otlp"}                                                                                                  |
| 1640891843197 | 2021-12-30T19:17:23.197Z info service/telemetry.go:92 Setting up own telemetry...                                                                                                                                     |
| 1640891843198 | 2021-12-30T19:17:23.198Z info service/telemetry.go:116 Serving Prometheus metrics {"address": ":8888", "level": "basic", "service.instance.id": "8dbe0f4d-2c62-406f-b5ad-cf9178feebe3", "service.version": "latest"}  |
| 1640891843198 | 2021-12-30T19:17:23.198Z info service/collector.go:235 Starting otelcol... {"Version": "v0.1.0", "NumCPU": 2}                                                                                                         |
| 1640891843198 | 2021-12-30T19:17:23.198Z info service/collector.go:131 Everything is ready. Begin running and processing data.                                                                                                        |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Originally posted by @Danwakeem in #199

[BUG] boto3 invoke method can not be instrumented

When a Lambda function invokes another function, there is a problem with the instrumentation of the invoke method.

Example lambda code:

import json
import os
import boto3

FUNCTION_NAME = os.getenv('INVOKE_FUNCTION_NAME')

client = boto3.client('lambda')


def invoke(sweets):
    print('Invoke %s with data: %s' % (FUNCTION_NAME, sweets))
    response = client.invoke(
        FunctionName=FUNCTION_NAME,
        InvocationType='RequestResponse',
        Payload=json.dumps(sweets),
    )
    return response


def get_sweets(event, context):
    body = event['body']

    print('Check if %s is available' % str(body))
    response = invoke(body)
    payload = json.loads(response["Payload"].read())

    return {'statusCode': payload['statusCode'], 'body': json.dumps(payload['body'])}

Error

[ERROR] AttributeError: 'str' object has no attribute 'get'
Traceback (most recent call last):
  File "/opt/python/opentelemetry/instrumentation/aws_lambda/__init__.py", line 222, in _instrumented_lambda_handler_call
    result = call_wrapped(*args, **kwargs)
  File "/var/task/check_sweets.py", line 24, in get_sweets
    response = invoke(body)
  File "/var/task/check_sweets.py", line 12, in invoke
    response = client.invoke(
  File "/var/task/botocore/client.py", line 386, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/python/opentelemetry/instrumentation/botocore/__init__.py", line 215, in _patched_api_call
    BotocoreInstrumentor._patch_lambda_invoke(call_context.params)
  File "/opt/python/opentelemetry/instrumentation/botocore/__init__.py", line 178, in _patch_lambda_invoke
    headers = payload.get("headers", {})

Environment

  • AWS Lambda Python 3.8 runtime
  • opentelemetry-instrumentation 0.24b0 (tested also on 0.25b2)
  • opentelemetry-instrumentation-boto 0.24b0 (tested also on 0.25b2)
  • opentelemetry-instrumentation-botocore 0.24b0 (tested also on 0.25b2)
  • boto3 - tested on various versions the same result

Missing the Attributes Processor

I tried to use the attributes processor and it doesn't seem to be compatible with the Lambda/Node.js layer.

I tested the two configuration files below, but they didn't work. Is it possible to add it to the Lambda layer?

References:
https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/README.md#general-information
https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/attributesprocessor

Codes and Errors:
Error: Error: cannot load configuration: unknown processors type "attributes" for attributes

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
    loglevel: debug
  awsxray:
    index_all_attributes: true
    
processors:
  attributes:
    exclude:
      match_type: strict
      attributes:
        - key: otel_resource_process_executable_name

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: 
      - attributes
      exporters: [logging, awsxray]
    metrics:
      receivers: [otlp]
      exporters: [logging]

Error: Error: cannot load configuration: unknown processors type "attributes" for attributes/delete

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
    loglevel: debug
  awsxray:
    index_all_attributes: true
    
processors:
  attributes/delete:
    actions:
      - key: otel_resource_process_executable_name
        action: delete

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes/delete]
      exporters: [logging, awsxray]
    metrics:
      receivers: [otlp]
      exporters: [logging]

Delete python/src/otel/Makefile

python/src/otel/Makefile has commands like yum install - presumably this isn't for normal use. Do we still need it, or can it be deleted?

Add NathanielRN to repo approvers

Description

The AWS Observability team continues to make improvements to the OTel + AWS experience. I have had the chance to work on OpenTelemetry for the last year, and have made contributions to this repo as well. See:

#110
#148
#150
#124

I have experience contributing to OpenTelemetry repos, especially OTel Python. Given all of this, I'm requesting to be added as an approver to this opentelemetry-lambda repo so I can help with reviews and face less friction as I continue to work on new features for the repo. I believe my experience will help me make positive contributions to this project.

Thank you! 🙂

Tag @codeboten

Health check for collector extension

The collector may become unhealthy without crashing due to some issue. If Lambda extensions have a mechanism for reporting health, for example by exposing a health check HTTP endpoint at a well-known path, we should use it.

Document lack of support for vanilla Java 8 runtime

Our layers don't work on the vanilla, non-Corretto Java 8 runtime. This is likely because that runtime uses an old version of Java 8 (if the documented 1.8.0 is to be believed, it's the very first version of Java 8?). We should document that we don't support this runtime, as customers shouldn't be starting up new functions on a non-Corretto runtime anyway.

Collector start error is never surfaced

ipp.appDone = make(chan struct{})
go func() {
	defer close(ipp.appDone)
	appErr := ipp.svc.Run()
	if appErr != nil {
		err = appErr
	}
}()

The err assignment here can happen after the start function returns, so this logic won't surface errors from ipp.svc.Run.

Unrecognized value for otel.propagators: b3. Make sure the artifact including the propagator is on the classpath.

I followed https://aws-otel.github.io/docs/getting-started/lambda/lambda-java and I'm now seeing:

io.opentelemetry.sdk.autoconfigure.ConfigurationException: Unrecognized value for otel.propagators: b3. Make sure the artifact including the propagator is on the classpath.
	at io.opentelemetry.sdk.autoconfigure.PropagatorConfiguration.getPropagator(PropagatorConfiguration.java:58)
	at io.opentelemetry.sdk.autoconfigure.PropagatorConfiguration.configurePropagators(PropagatorConfiguration.java:39)
	at io.opentelemetry.sdk.autoconfigure.OpenTelemetrySdkAutoConfiguration.initialize(OpenTelemetrySdkAutoConfiguration.java:40)
	at io.opentelemetry.instrumentation.awslambda.v1_0.TracingRequestWrapperBase.<init>(TracingRequestWrapperBase.java:23)
	at io.opentelemetry.instrumentation.awslambda.v1_0.TracingRequestWrapper.<init>(TracingRequestWrapper.java:16)

I'm using the Lambda layer:aws-otel-java-wrapper-ver-1-2-0:2. https://github.com/aws-observability/aws-otel-lambda/blob/main/java/scripts/otel-handler contains b3, so I'd assume the B3 propagator will be part of the classpath automatically.

Terraform apply does not work on the hello-awssdk lambda

╷
│ Error: error waiting for Lambda Provisioned Concurrency Config (hello-java-awssdk-agent:provisioned) to be ready: status reason: FUNCTION_ERROR_INIT_FAILURE
│
│   with aws_lambda_provisioned_concurrency_config.lambda_api,
│   on main.tf line 54, in resource "aws_lambda_provisioned_concurrency_config" "lambda_api":
│   54: resource "aws_lambda_provisioned_concurrency_config" "lambda_api" {
│
╵

This used to work before, so something has changed recently that broke this sample

Add code samples to set attributes on current span

Could the documentation/code samples be updated to show how to add attributes to an in-process span when using the OpenTelemetry Lambda layer? For example, when I add a from opentelemetry import trace statement in a Lambda function that uses the OTel layer, I see the following error:

{
  "errorMessage": "'ProxyTracerProvider' object has no attribute 'force_flush'",
  "errorType": "AttributeError",
  "stackTrace": [
    "  File \"/var/task/wrapt/wrappers.py\", line 566, in __call__\n    return self._self_wrapper(self.__wrapped__, self._self_instance,\n",
    "  File \"/opt/python/opentelemetry/instrumentation/aws_lambda/__init__.py\", line 137, in _instrumented_lambda_handler_call\n    tracer_provider.force_flush()\n"
  ]
}

In a non-Lambda context, I might set custom attributes on the current span with something similar to the below.

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import  ConsoleSpanExporter, SimpleSpanProcessor
import time
import os
# A Resource can be required for some backends, e.g. Jaeger
# If the resource isn't set, traces won't appear in Jaeger
resource = Resource(attributes={
    "service.name": "service"
})

trace.set_tracer_provider(TracerProvider(resource=resource))

trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("miguel-rex") as span:
    span.set_attribute("msg", "hello world")
    print("Hello world!")
    time.sleep(1)
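
For comparison, when the layer has already configured the provider, attributes can typically be set on the invocation span created by the Lambda instrumentation without building a second TracerProvider; a minimal sketch, not official guidance from this repo:

from opentelemetry import trace

def handler(event, lambda_context):
    # The layer's instrumentation already started a span for this invocation;
    # annotate it instead of configuring another TracerProvider.
    span = trace.get_current_span()
    span.set_attribute("msg", "hello world")
    return {"statusCode": 200}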

Relevant Versions

OpenTelemetry Lambda layer version: arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-python38-ver-1-3-0:1
OpenTelemetry SDK version: 1.3.0
CollectorVersion: v0.1.0

Is 3rd party exporter in collector-contrib acceptable in Lambda Collector extension?

Refer to the discussion in #21 (comment).
Right now the Lambda collector extension has only Collector-core exporters, such as otlpgrpcexporter, loggingexporter, etc.
Can we bring in popular exporters like awsxrayexporter and datadogexporter?

AWS maintains a downstream Lambda repo; it publishes a public Lambda layer that replaces the Collector with the ADOT Collector for company security reasons. That downstream public Lambda layer supports lots of popular 3rd-party exporters from Collector-contrib. If otel-lambda only accepts the OTLP exporter, it means 3rd-party companies have to either contribute their exporters to aws-observability or maintain their own downstream repo.

Here we have questions:

  1. can we add 3rd party exporters from Collector-contrib to Lambda Collector extension?
  2. if yes, what criteria must an exporter meet to be included?

Error on Lambda test: no such file or directory

I'm getting an error in my Lambda when including the self-built lambda layer:

{
  "errorMessage": "RequestId: 16cb64d6-a350-4939-8dd8-9148f8d934f1 Error: fork/exec /opt/extensions/opentelemetry-lambda-extension: no such file or directory",
  "errorType": "Extension.LaunchError"
}

Steps:

  1. Makefile build

     The ZIP file from step 1 has the content:

     extensions/opentelemetry-lambda-extension

  2. Makefile publish

     The layer is created in the account + region.

  3. Adding the layer to a Lambda function with hello-world content on the NodeJS 12 runtime.

  4. Adding the collector.yaml in the root of the function, with content:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: localhost:55680

exporters:
  awsxray:
  awsemf:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsxray]
    metrics:
      receivers: [otlp]
      exporters: [awsemf]
  5. Setting the env variable:
     OPENTELEMETRY_COLLECTOR_CONFIG_FILE=/var/task/collector.yaml

  6. Hit test.

[enhancement] Add OpenTelemetry context propagation as default

As this is the OpenTelemetry Lambda instrumentation project and it uses the OpenTelemetry SDKs, it would be nice to have OTel context propagation enabled by default in the AWS Lambda instrumentation. The current implementation of the instrumentation requires calling the Lambda function with AWS X-Ray context propagation. Enabling OTel propagation by default could help avoid holes/remote spans in traces if the end user is not aware of AWS X-Ray context propagation.

Question: Collector not sending data

Our organisation is running a few tests to see if this is a viable option, but we've run into some teething issues: the collector receives data, but none of that data is sent to the exporters.

Just wondering what we can look at. We're using the following conf

receivers:
  otlp:
    protocols:
      grpc: # on port 55680
      http: # on port 55681

processors:
  batch:

exporters:
  otlp:
    endpoint: "https://api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "****redacted***"
      "x-honeycomb-dataset": "testing"

extensions:
  health_check:
  pprof:
  zpages:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]

We can see the event is received, but never sent to our exporter

[opentelemetry-lambda-extension] Received event: {
	"eventType": "INVOKE",
	"deadlineMs": 1614115570609,
	"requestId": "dcaba70b-72e5-4b02-8bef-f7355fecd324",
	"invokedFunctionArn": "arn:aws:lambda:us-west-2:****redacted****:function:extention-test",
	"tracing": {
		"type": "X-Amzn-Trace-Id",
		"value": "Root=1-****redacted****;Sampled=0"
	}
}

Support for dynamic collector config

Usecase/Feature/Problem Statement

The Collector supports attribute processors and a few exporters where the config needs to be dynamic based on the use case.

Current approach

Include the collector config as part of the lambda function and reference the file with env var.

Challenges in the current approach

  • This requires changing the lambda function
  • Not all config (e.g. an exporter with headers containing an auth token, or references to AWS secrets) can be committed to SCM; some of it has to reference env vars or AWS secrets.

Proposal

I don't have a detailed proposal yet; I just wanted to check if this makes sense.
Provide a way to add dynamic config, with some kind of templating to generate the config when the collector is instantiated.

Registered auto-instrumentation does not seem to be working for nodejs lambda layer

I see that there is instrumentation here: https://github.com/open-telemetry/opentelemetry-lambda/blob/main/nodejs/packages/layer/src/wrapper.ts .

However, in my x-ray, I don't see the spans for what I'm expecting such as for graphql or dynamodb.

I'm using the lambda layer arn:aws:lambda:us-east-2:901920570463:layer:aws-otel-nodejs-ver-0-23-0:1.

I'm not sure how to debug this or whether this is actually working or not.

[Python] Tests that init OTel with otel-instrument script should not use TestBase setup method.

Description

As is common with all instrumentations upstream, AWS Lambda instrumentation testing involves using TestBase. TestBase has a default implementation of super().setUpClass() which initializes OTel Python with a TracerProvider.

This is a problem because we want the tests to test the otel-instrument command which ALSO sets the TracerProvider. But setting the TracerProvider a 2nd time is not allowed so the effects of otel-instrument are ignored. What's more, otel-instrument runs as a subshell as of #164 so we'll have to be more clever as to how we propagate its changes to the parent process while also making sure we do NOT prematurely init OTel's TracerProvider with the TestBase default methods.

One solution is to call the upstream opentelemetry-instrumentation auto initialization script ourselves in the test.

Proposal: Include Batch processor

Is your feature request related to a problem? Please describe.

Currently, no processors are included in the collector for Lambda. This makes it impossible to use the batch processor. Missing it in the pipeline causes some issues, such as the exporter sending inefficient batches or not being able to limit the maximum batch size.

Describe the solution you'd like

Include Batch processor

Callback function returns a null response

Hi Team,
I instrumented a Lambda function with the AWS Distro Lambda layer. After the changes I am able to send traces to the exporters defined in the collector; however, the function returns a "null" response instead of the actual response. I need your assistance to solve the issue.

Note: The null response occurs when I use the callback function inside the handler.
It returns a correct response if I use return response instead of return callback(null, response) in the following function code.

Here is function code:
exports.handler = async (event, ctx, callback) => {
  console.info(event);
  // TODO implement
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda!'),
  };
  return callback(null, response);
};

Regards,
Venakt

Rename InProcessCollector

We don't need to call InProcessCollector an in-process collector because we are providing it as a library. Let's move the collector into a package lambdacollector and call it Collector.

The public API should be something like:

collector, err := lambdacollector.NewCollector(f)
if err != nil {
   log.Fatal(err)
}

if err := collector.Start(); err != nil {
   log.Fatal(err)
}

if err := collector.Stop(); err != nil {
   log.Fatal(err)
}

Switch Python layer to use HTTP export

Now that opentelemetry-python has an HTTP exporter that doesn't use native libraries like gRPC, it would be a more appropriate exporter for the Lambda layer to avoid versioning issues. It would presumably also allow building the layer without Docker as the native components would be gone.
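
For reference, a sketch of wiring up the HTTP/protobuf span exporter in opentelemetry-python; the module path assumes the opentelemetry-exporter-otlp-proto-http package, and the endpoint below assumes the collector extension's default OTLP/HTTP port:

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Export over HTTP/protobuf to the in-process collector extension; no native gRPC wheels needed.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)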

Collector make broken

PR #73 caused the collector make process to fail. It is currently failing with the following error, due to Version and GitHash not being defined. I was able to back up one commit and build fine.

# github.com/open-telemetry/opentelemetry-lambda/collector/lambdacollector
lambdacollector/collector.go:67:14: undefined: Version
lambdacollector/collector.go:68:14: undefined: GitHash
Makefile:18: recipe for target 'build' failed
make: *** [build] Error 2

Node.js Layer using Deprecated Exporter

If I'm reading things correctly, the Node.js exporter that's used in the Node.js layer

import { OTLPTraceExporter } from '@opentelemetry/exporter-otlp-proto';

is deprecated. See https://cloud-native.slack.com/archives/C01NL1GRPQR/p1637779230143700

In addition to being deprecated, @opentelemetry/exporter-otlp-proto does not appear to allow users to export using OTLP over gRPC; it only supports exporting over HTTP, like @opentelemetry/exporter-otlp-http.

It would be great if this could be updated to use one of the currently supported exporters, and if data could be exported via either gRPC or HTTP.

(Not so great that I have time to take on the work, but great regardless ;))

Getting "the grpc server returns error Unknown error" in cloudwatch

I'm working with the open-telemetry Rust OTLP pipeline in an AWS Lambda and I've started running across an error that appears to be preventing my traces from propagating to X-Ray:

OpenTelemetry trace error occurred. Exporter otlp encountered the following error(s): the grpc server returns error Unknown error

Below is my current collector.yaml configuration; is there something incorrect about my configuration setup that's causing this issue? Or is this a bug?

#collector.yaml in the root directory
#Set an environment variable 'OPENTELEMETRY_COLLECTOR_CONFIG_FILE' to '/var/task/<path>/<to>/<file>'

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
  awsxray:
  otlp:
    endpoint: 0.0.0.0:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging, awsxray, otlp]
    metrics:
      receivers: [otlp]
      exporters: [logging]
