
traceloop / openllmetry-js

Sister project to OpenLLMetry, but in TypeScript. Open-source observability for your LLM application, based on OpenTelemetry.

Home Page: https://www.traceloop.com/openllmetry

License: Apache License 2.0

TypeScript 99.71% Shell 0.03% JavaScript 0.26%
datascience generative-ai javascript llmops metrics ml model-monitoring monitoring nextjs observability open-source opentelemetry opentelemetry-javascript typescript

openllmetry-js's People

Contributors

5war00p, dependabot[bot], galkleinman, kartik1397, kartikay-bagla, nirga, tomer-friedman

openllmetry-js's Issues

OpenAI / Azure OpenAI tests

Write tests to cover our OpenAI / Azure OpenAI instrumentations, similar to other instrumentations in this repo.

Re-enable Pinecone tests

Following #56, we switched to mocked tests based on pre-recording of HTTP calls.
While this method worked well for OpenAI and other HTTP calls, I had some issues with the way our Pinecone tests are built.
This needs to be fixed and re-enabled.

🚀 Feature: Chroma Instrumentation

Which component is this feature for?

All Packages

🔖 Feature description

Instrument calls to Chroma, including adding attributes, similarly to our Python Chroma instrumentation. The instrumentation should support all types of calls - sync, async, etc.

🎤 Why is this feature needed?

Completeness of OpenLLMetry

✌️ How do you aim to achieve this?

Similarly to the other instrumentations we have in this repo.

🔄️ Additional Information

  • Make sure to initialize this in the Traceloop SDK.
  • Add proper tests.
  • Add a sample in the sample app.
  • Add a screenshot from Traceloop showing the instrumentation in action.
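The wrapping pattern the existing instrumentations use can be sketched like this (a simplified stand-in for `@opentelemetry/instrumentation`'s wrapping helpers; the `chroma.query` method name and the `recorded` markers are illustrative only): intercept the client method, record around it, and handle promises so both sync and async calls are covered.

```typescript
// Records start/end markers around a call; real code would start and end
// an OpenTelemetry span here instead.
const recorded: string[] = [];

function wrapWithSpan<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => R,
): (...args: A) => R {
  return (...args: A): R => {
    recorded.push(`${name}.start`);
    const result = fn(...args);
    if (result instanceof Promise) {
      // Async call: end the span only when the promise settles.
      return result.finally(() => recorded.push(`${name}.end`)) as unknown as R;
    }
    recorded.push(`${name}.end`);
    return result;
  };
}

// Usage against a hypothetical Chroma-like sync method:
const query = wrapWithSpan("chroma.query", (text: string) => `results for ${text}`);
const out = query("hello");
```

The same wrapper covers async client methods because the span is closed in `finally`, whether the promise resolves or rejects.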

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None

Bug in OpenAI streaming instrumentation

Steps to reproduce

Run the following file:

import * as traceloop from "@traceloop/node-server-sdk";
import OpenAI from "openai";

traceloop.initialize({
  appName: "sample_openai",
  apiKey: process.env.TRACELOOP_API_KEY,
  disableBatch: true,
});
const openai = new OpenAI();

class SampleOpenAI {
  @traceloop.workflow("sample_completion")
  async completion() {
    const completion = await openai.completions.create({
      prompt: "Tell me a joke about TypeScript",
      model: "gpt-3.5-turbo-instruct",
      stream: true
    });

    return completion;
  }
}

traceloop.withAssociationProperties(
  { user_id: "12345", chat_id: "789" },
  async () => {
    const sampleOpenAI = new SampleOpenAI();

    const completion = await sampleOpenAI.completion();
    console.log(completion);

    await traceloop.reportScore({ chat_id: "789" }, 1);
  },
);

Actual

Traceloop exporting traces to https://api.traceloop.com
/Users/k/personal/openllmetry-js/packages/instrumentation-openai/dist/src/instrumentation.js:211
                result.choices.forEach((choice, index) => {
                               ^

TypeError: Cannot read properties of undefined (reading 'forEach')
    at OpenAIInstrumentation._endSpan (/Users/k/personal/openllmetry-js/packages/instrumentation-openai/dist/src/instrumentation.js:211:32)
    at /Users/k/personal/openllmetry-js/packages/instrumentation-openai/dist/src/instrumentation.js:171:30
    at new Promise (<anonymous>)
    at /Users/k/personal/openllmetry-js/packages/instrumentation-openai/dist/src/instrumentation.js:149:20
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.completion (/Users/k/personal/openllmetry-js/packages/sample-app/dist/src/sample_streaming.js:19:28)
    at async /Users/k/personal/openllmetry-js/packages/traceloop-sdk/dist/src/lib/tracing/decorators.js:22:25
    at async descriptor.value (/Users/k/personal/openllmetry-js/packages/traceloop-sdk/dist/src/lib/tracing/decorators.js:60:24)
    at async /Users/k/personal/openllmetry-js/packages/sample-app/dist/src/sample_streaming.js:32:24

Node.js v18.19.0

Expected

The program should not crash and should send correct spans to Traceloop.
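The crash happens because with `stream: true` the OpenAI SDK returns a `Stream` object rather than a completion, so `result.choices` is undefined when the instrumentation iterates it. A minimal sketch of the kind of guard that avoids the crash (the `recordChoices` name and the attribute keys are illustrative, not the instrumentation's actual code):

```typescript
// Simplified stand-in for the instrumentation's span-ending logic.
// Assumption: streaming results carry no `choices` array, so we skip
// attribute recording instead of calling forEach on undefined.
interface CompletionLike {
  choices?: { text?: string }[];
}

function recordChoices(
  result: CompletionLike,
  setAttr: (key: string, value: string) => void,
): void {
  // Guard against streaming responses, which have no `choices`.
  if (!Array.isArray(result.choices)) {
    return;
  }
  result.choices.forEach((choice, index) => {
    if (choice.text !== undefined) {
      setAttr(`llm.completions.${index}.content`, choice.text);
    }
  });
}

const attrs: Record<string, string> = {};
recordChoices({}, (k, v) => (attrs[k] = v)); // streaming-like result: no crash
recordChoices({ choices: [{ text: "a joke" }] }, (k, v) => (attrs[k] = v));
```

A full fix would also accumulate the streamed chunks so the completion content can still be recorded when the stream finishes.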

🚀 Feature: LlamaIndex Instrumentation

Which component is this feature for?

All Packages

🔖 Feature description

Instrument LlamaIndex apps, including adding attributes, similarly to our instrumentation in Python. The instrumentation should support all types of calls - streaming, non-streaming, async, etc.

🎤 Why is this feature needed?

Completeness of OpenLLMetry-JS

✌️ How do you aim to achieve this?

Similarly to the other instrumentations we have in this repo.

🔄️ Additional Information

This should instrument all TypeScript-equivalent APIs that we instrument in Python.

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None

Re-enable VertexAI tests

Following #56, we switched to mocked tests based on pre-recording of HTTP calls.
While this method worked well for OpenAI and other HTTP calls, it doesn't work for gRPC, which Vertex AI / Google uses.

We need to figure out how to mock out the requests there.

API in SDK for reporting prompts and completions

Right now, the only way to create spans for prompts and completions is to use one of the ready-made instrumentations.

We should have an easy-to-use API in the SDK to manually create a span with the appropriate attributes for cases where the current instrumentations are not enough, similar to the APIs we currently have in Go and Ruby.
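One possible shape for such an API, sketched below (the function name, parameter names, and attribute keys are all hypothetical, not a committed SDK surface): the caller describes the prompt and completion, and the helper produces the span attributes an instrumentation would otherwise have set.

```typescript
// Hypothetical manual-reporting helper; in the real SDK this would create
// an OpenTelemetry span and set these attributes on it.
interface LLMCallReport {
  vendor: string;
  model: string;
  prompt: string;
  completion: string;
}

function llmSpanAttributes(report: LLMCallReport): Record<string, string> {
  // Attribute keys mirror the style of the existing instrumentations
  // (assumed naming, shown for illustration only).
  return {
    "llm.vendor": report.vendor,
    "llm.request.model": report.model,
    "llm.prompts.0.content": report.prompt,
    "llm.completions.0.content": report.completion,
  };
}

const manualAttrs = llmSpanAttributes({
  vendor: "openai",
  model: "gpt-3.5-turbo",
  prompt: "Tell me a joke about TypeScript",
  completion: "Why did the developer trust the compiler? ...",
});
```

Wrapping this in a function that opens a span, sets the attributes, and ends the span would give callers parity with the ready-made instrumentations.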

🚀 Feature: Pinecone Instrumentation

Which component is this feature for?

All Packages

🔖 Feature description

Instrument calls to Pinecone, including adding attributes, similarly to our Python Pinecone instrumentation. The instrumentation should support all types of calls - streaming, non-streaming, async, etc.

🎤 Why is this feature needed?

Completeness of OpenLLMetry

✌️ How do you aim to achieve this?

Similarly to the other instrumentations we have in this repo.

🔄️ Additional Information

  • Make sure to initialize this in the Traceloop SDK.
  • Add proper tests.
  • Add a sample in the sample app.
  • Add a screenshot from Traceloop showing the instrumentation in action.

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None

🚀 Feature: rewrite LlamaIndex instrumentation to use LlamaIndex `CallbackManager`

Which component is this feature for?

LlamaIndex Instrumentation

🔖 Feature description

Right now, we monkey-patch classes and methods in LlamaIndex, which requires endless work and constant maintenance. LlamaIndex has a callback system that can potentially be used to create/end spans without being too coupled with the framework's inner structure.

🎤 Why is this feature needed?

Support LlamaIndex entirely and be future-proof against internal API changes

✌️ How do you aim to achieve this?

Look into LlamaIndex's `callback_manager` and how other frameworks are using it.
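The idea can be sketched as follows (the event names and handler interface are assumptions about what LlamaIndex's `CallbackManager` exposes, not its actual API): spans are opened and closed from start/end events keyed by an event id, with no knowledge of the framework's internal classes.

```typescript
// Simplified span record; the real implementation would use OpenTelemetry spans.
interface SpanRecord {
  name: string;
}

class CallbackSpanHandler {
  readonly finished: SpanRecord[] = [];
  private readonly active = new Map<string, SpanRecord>();

  // Called by the framework when an event (e.g. a retrieval or LLM call) starts.
  onEventStart(eventId: string, eventType: string): void {
    this.active.set(eventId, { name: `llamaindex.${eventType}` });
  }

  // Called when the matching event ends; closes the span.
  onEventEnd(eventId: string): void {
    const span = this.active.get(eventId);
    if (span !== undefined) {
      this.finished.push(span);
      this.active.delete(eventId);
    }
  }
}

const llamaHandler = new CallbackSpanHandler();
llamaHandler.onEventStart("1", "retrieve");
llamaHandler.onEventEnd("1");
```

Because spans are keyed by event id rather than by patched method, internal refactors in LlamaIndex would not break the instrumentation.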

๐Ÿ”„๏ธ Additional Information

No response

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None

🚀 Feature: rewrite Langchain instrumentation to use Langchain Callbacks

Which component is this feature for?

Langchain Instrumentation

🔖 Feature description

Right now, we monkey-patch classes and methods in Langchain, which requires endless work and constant maintenance. Langchain has a system for callbacks that can potentially be used to create/end spans without being too coupled with the framework's inner structure.

🎤 Why is this feature needed?

Support Langchain entirely and be future-proof against internal API changes

✌️ How do you aim to achieve this?

Look into Langchain callbacks and how other frameworks are using them.
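Langchain's callback interface exposes chain start/end hooks, which maps naturally onto span lifecycles. A sketch of the approach (method signatures are simplified; Langchain's real handlers receive richer payloads, and the span bookkeeping here is a stand-in for real OpenTelemetry spans):

```typescript
// A callback handler keyed by run id: one span per chain run.
class ChainSpanHandler {
  readonly finished: string[] = [];
  private readonly active = new Map<string, string>();

  handleChainStart(chainName: string, runId: string): void {
    // Span named after the chain class, opened when the run starts.
    this.active.set(runId, `${chainName}.task`);
  }

  handleChainEnd(runId: string): void {
    const spanName = this.active.get(runId);
    if (spanName !== undefined) {
      this.finished.push(spanName);
      this.active.delete(runId);
    }
  }
}

const chainHandler = new ChainSpanHandler();
chainHandler.handleChainStart("LLMChain", "run-1");
chainHandler.handleChainEnd("run-1");
```

Keying on the run id means nested and concurrent chain runs each get their own span without any monkey-patching.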

๐Ÿ”„๏ธ Additional Information

No response

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None

Add OpenTelemetry Auto-Instrumentation for Langchain Library

Description

Add a new OpenTelemetry auto-instrumentation for the Langchain library. This instrumentation will greatly improve the observability and tracing capabilities of Langchain-based applications.

Details

  • Instrumentation Name: instrumentation-langchain
  • Description: This instrumentation will provide automatic tracing and observability for the Langchain Library.
  • Language: TypeScript

Motivation

Adding this instrumentation will allow seamless integration of OpenTelemetry with Langchain, enabling better visibility into chains, which will be represented as workflows/traces in tracing data and the Traceloop dashboard.

High Level Instructions

  1. Add a new package to this monorepo, called instrumentation-langchain, with the same structure/boilerplate as instrumentation-openai.
  2. Implement the auto-instrumentation such that (quite similar to the instrumentation logic in the Python SDK):
  • Every chain in Langchain produces a span named <chain class name>.task.
  • The Traceloop Span Kind attribute from the semantic-conventions package is set to task.
  • The Workflow Name and Correlation Ids are propagated as well.
  • A workflow span is produced for SequentialChain & RetrievalQA chains.
  3. Integrate the implemented instrumentation within traceloop-sdk.
  4. Add tests.
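The naming and span-kind rules above can be sketched with small helpers (illustrative only; the real instrumentation would attach these values as OpenTelemetry span names and attributes):

```typescript
// Chains that should also produce a workflow span, per the instructions above.
const WORKFLOW_CHAINS = new Set(["SequentialChain", "RetrievalQA"]);

// Every chain run produces a task span named `<chain class name>.task`.
function taskSpanName(chainClassName: string): string {
  return `${chainClassName}.task`;
}

// Whether a chain additionally gets a workflow span.
function producesWorkflowSpan(chainClassName: string): boolean {
  return WORKFLOW_CHAINS.has(chainClassName);
}
```

Keeping the rules in a pure helper like this also makes the naming convention trivially testable, independent of any Langchain internals.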

Checklist

Please make sure the following criteria are met:

  • I have searched the existing issues and confirmed that this request is not a duplicate.
  • I have provided a clear and concise description of the requested instrumentation.
  • I have explained the motivation and benefits of adding this instrumentation.
  • I have included any additional information or resources that may be relevant.

Thank you!

🚀 Feature: Bedrock Instrumentation

Which component is this feature for?

All Packages

🔖 Feature description

Instrument calls to AWS's [Bedrock](https://www.npmjs.com/package/@aws-sdk/client-bedrock-runtime), including adding attributes for input parameters, model, etc., similarly to our OpenAI instrumentation. The instrumentation should support all types of calls - streaming, non-streaming, async, etc.

Instrumentation should support all models supported in our Python instrumentation.

🎤 Why is this feature needed?

Completeness of OpenLLMetry

✌️ How do you aim to achieve this?

Similarly to the other instrumentations we have in this repo.

🔄️ Additional Information

  • Make sure to initialize this in the Traceloop SDK.
  • Add proper tests.
  • Add a sample in the sample app.
  • Add a screenshot from Traceloop showing the instrumentation in action.

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None

🚀 Feature: VertexAI Instrumentation

Which component is this feature for?

All Packages

🔖 Feature description

Instrument calls to Google's Vertex AI, including adding attributes for input parameters, model, etc., similarly to our OpenAI instrumentation. The instrumentation should support all types of calls - streaming, non-streaming, async, etc.

Instrumentation should support all models supported in our Python instrumentation. This should specifically work with Google's new Gemini model.

🎤 Why is this feature needed?

Completeness of OpenLLMetry

✌️ How do you aim to achieve this?

Similarly to the other instrumentations we have in this repo.

🔄️ Additional Information

  • Make sure to initialize this in the Traceloop SDK.
  • Add proper tests.
  • Add a sample in the sample app.
  • Add a screenshot from Traceloop showing the instrumentation in action.

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None

🚀 Feature: Cohere Instrumentation

Which component is this feature for?

All Packages

🔖 Feature description

Instrument calls to Cohere, including adding attributes for input parameters, model, etc., similarly to our OpenAI instrumentation. The instrumentation should support all types of calls - streaming, non-streaming, async, etc.

Instrumentation should support all models supported in our Python instrumentation.

🎤 Why is this feature needed?

Completeness of OpenLLMetry

✌️ How do you aim to achieve this?

Similarly to the other instrumentations we have in this repo.

🔄️ Additional Information

  • Make sure to initialize this in the Traceloop SDK.
  • Add proper tests.
  • Add a sample in the sample app.
  • Add a screenshot from Traceloop showing the instrumentation in action.

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None
