tryagi / langchain

C# implementation of LangChain. We try to be as close to the original as possible in terms of abstractions, but are open to new entities.

Home Page: https://tryagi.gitbook.io/langchain/

License: MIT License

C# 99.98% Batchfile 0.02%
ai csharp langchain llm langchain-csharp abstractions agents chain joi llms

langchain's People

Contributors

allcontributors[bot], ceejeeb, curlyfro, danijerez, dependabot[bot], eltociear, ericgreenmix, fiorelorenzo, github-actions[bot], gunpal5, havendv, hiptopjones, irooc, jekakmail, kharedev247, khoroshevj, lyx52, matt-regier, siegduch, sweep-ai[bot], tesanti, vikhyat90


langchain's Issues

ASP.NET endpoints that mimic the OpenAI API to allow use of web UIs

@TesAnti
Anti — Today at 4:49 PM
@HavenDV what do you think about mimicking the OpenAI API? this will allow us to attach a UI to our chain
You were mentioning LangServe
haven't looked at it yet though
HavenDV — Today at 5:03 PM
Didn't understand what you mean. Is this about implementing all the features that the OpenAI API has?
Anti — Today at 5:03 PM
actually only the text completion ones.
this will allow us to serve chains for web UIs like this:
https://github.com/oobabooga/text-generation-webui
there are a lot of them
you can build a chain that gets data through RAG, makes a Google search, summarizes everything, and puts it out to the user
but for the user side it would be just chat, and all the complicated behaviour would be hidden
HavenDV — Today at 5:09 PM
yes, I have plans to do something similar to chat as an example. But I'm not sure the server should provide an API equivalent to the OpenAI text endpoints and leave the rest up to the client. Typically this chain is implemented on the server side and only the final endpoints are provided, because otherwise it opens up space for reverse engineering and abuse
Anti — Today at 5:21 PM
Typically this chain is implemented on the server side and only the final end points are provided
That is what I'm talking about. You implement a chain and wrap it in an API which mimics OpenAI. Then you can use any existing UI (designed for Ollama, OpenAI, or anything else) to chat with your chain
HavenDV — Today at 5:58 PM
Yes, I understand you, in general this makes sense if the restrictions on the number of possible user requests per second/minute/hour/day are configured correctly
We need to create an issue about this
Anti — Today at 6:44 PM
i was seeing this as for single-person use. But now that you mention it... We can make a queue of requests. This should be enough
For proper limits there should be user authorization of some sort
Another interesting thing would be to have not only an API wrapper, but also wrappers for bots, like for Discord and Telegram
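A rough shape of what such a wrapper could look like — a minimal sketch assuming an ASP.NET Core minimal API (web SDK); RunChainAsync and the record types are hypothetical stand-ins, not part of this repository:

```csharp
// Sketch: an endpoint that mimics the OpenAI chat completions route so
// existing web UIs can talk to a chain.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Hypothetical stand-in for the real chain (RAG, search, summarization, ...).
static Task<string> RunChainAsync(string input) =>
    Task.FromResult($"Echo: {input}");

app.MapPost("/v1/chat/completions", async (ChatRequest request) =>
{
    // Use the last message as the chain input.
    var input = request.Messages.LastOrDefault()?.Content ?? string.Empty;
    var answer = await RunChainAsync(input);

    // Shape the reply like an OpenAI chat completion response.
    return Results.Json(new
    {
        id = Guid.NewGuid().ToString("N"),
        @object = "chat.completion",
        choices = new[]
        {
            new { index = 0, message = new { role = "assistant", content = answer } }
        }
    });
});

app.Run();

record ChatMessage(string Role, string Content);
record ChatRequest(List<ChatMessage> Messages);
```

Because the route and response shape match the OpenAI API, any UI that accepts a custom OpenAI base URL could be pointed at this server without knowing anything about the chain behind it.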

First example on the front page does not run

What would you like to be added:

Better running examples.

Why is this needed:

If someone is a newbie using this library, and they copy/paste this code and it does not run on the first try, I think that is a problem.

Anything else we need to know?

The code on the front page does not run on this line, because PromptTemplate does not exist:
var prompt = new PromptTemplate(new PromptTemplateInput(template, new List<string>(1){"product"}));

Perhaps a very easy solution/clue is adding the necessary usings to the example.

Thanks, and congratulations on your work. It will be very useful for a lot of people.

Word Source support

What would you like to be added:

Why is this needed:

Anything else we need to know?

Together.AI provider

Project: LangChain.Providers.Together

  • Since Together supports the official OpenAI interface, the implementation is very simple using the OpenAI provider. See the link for Anyscale
  • You can use GPT-4 for initial implementation
  • Tests are encouraged but not required. It's better to do something minimal than nothing.

References:

Now usage looks like:

var provider = new OpenAiProvider(
    apiKey: Environment.GetEnvironmentVariable("TOGETHER_API_KEY") ??
        throw new InconclusiveException("TOGETHER_API_KEY is not set"),
    customEndpoint: "api.together.xyz");
var llm = new OpenAiChatModel(provider, id: "meta-llama/Llama-2-70b-chat-hf");
var embeddings = new OpenAiEmbeddingModel(provider, id: "togethercomputer/m2-bert-80M-2k-retrieval");

Explore ability to use with DataFlow

I'm just thinking about different use cases. I think over time we will also need to create blocks for DataFlow - https://learn.microsoft.com/en-us/dotnet/standard/parallel-programming/dataflow-task-parallel-library - or at least keep this use case in mind

Anti — 01/21/2024 5:49 PM
About TPL. The last time I heard about it was something like 2010... Didn't know that it is still in use. Should be quite easy to wrap our chain links into ActionBlocks
Also I'm thinking of some sort of separate repository for prompts. Every model requires slightly different prompts, so having a repository with premade templates and tags seems like a good idea.

HavenDV — 01/21/2024 6:31 PM
We use TPL every day. It's the async/await Task and the compiler-generated code hidden behind it
But DataFlow is the official library for more complex connections based on this
Anti — 01/21/2024 6:33 PM
I thought TPL = DataFlow. I was using it 10 years ago, so I don't remember the terms. As I remember, it's a bunch of blocks with LinkTo methods
HavenDV — 01/21/2024 6:34 PM
Yes, and communications between blocks can occur in parallel without the user having to explicitly manage this. We just have to keep it in mind so we don't reinvent it.
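A minimal sketch of the idea, assuming the System.Threading.Tasks.Dataflow NuGet package; the block bodies here are placeholders, not real chain links:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks.Dataflow;

var results = new List<string>();

// First "chain link": e.g. a RAG lookup that prepares context for the LLM.
var retrieve = new TransformBlock<string, string>(
    question => $"context for: {question}");

// Second "chain link": e.g. the LLM call; here it just records its input.
var generate = new ActionBlock<string>(context => results.Add(context));

// LinkTo wires the blocks; completion propagates down the pipeline,
// and Dataflow handles parallelism between blocks for us.
retrieve.LinkTo(generate, new DataflowLinkOptions { PropagateCompletion = true });

retrieve.Post("What is LangChain?");
retrieve.Complete();
await generate.Completion;

Console.WriteLine(results[0]);
```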

[OpenAI] Support for JSON Response Format

Discussed in #80

Originally posted by samitugal November 27, 2023

As you may know, with the latest update, OpenAI added the response format feature to the ChatGPT APIs. Would you consider adding such a feature? The most challenging part for me is parsing the outputs as JSON.
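For reference, this is roughly what requesting JSON mode looks like against the public OpenAI API over raw HTTP; the field names follow OpenAI's documented request shape, but the provider abstraction in this repository may expose it differently:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new System.Net.Http.Headers.AuthenticationHeaderValue(
        "Bearer", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

var body = new
{
    model = "gpt-4-1106-preview",
    // response_format forces the model to emit syntactically valid JSON.
    response_format = new { type = "json_object" },
    messages = new[]
    {
        // JSON mode requires the word "JSON" to appear in the prompt.
        new { role = "system", content = "Reply in JSON with keys name and age." },
        new { role = "user", content = "Alice is 30." },
    },
};

var response = await http.PostAsJsonAsync(
    "https://api.openai.com/v1/chat/completions", body);
var json = await response.Content.ReadAsStringAsync();
Console.WriteLine(json);
```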


Return OpenAI SDK source generator for easy definition of tools

Describe the bug

Return OpenAI SDK source generator for easy definition of tools

Steps to reproduce the bug

Return OpenAI SDK source generator for easy definition of tools

Expected behavior

No response

Screenshots

No response

NuGet package version

No response

Additional context

No response

Change OpenAI default mode to streaming

Anti — 01/22/2024 1:05 PM
@HavenDV Is it possible to add a TokenGenerated event to the OpenAI provider? Maybe we should make those events part of the common interface?
HavenDV — 01/23/2024 2:48 AM
It's possible, but it will only work in streaming mode - https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream
And I'm not sure if we should use this as the default
Anti — 01/23/2024 11:45 AM
streaming mode is not slower than the regular one. and it looks better when you see the output of the LLM right away instead of waiting 10 seconds for a response
if you want, you can actually check if there are any subscribers to the event and pick the mode based on that
oh, and also a PromptSent event. This also helps quite a lot with debugging
HavenDV — 01/23/2024 10:50 PM
Yes, I think you are right and we need to make this the default behavior
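One way to implement the "pick the mode based on subscribers" idea — a hypothetical sketch, with stand-in method names rather than the provider's actual API:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;

var model = new ChatModelSketch();
var tokens = new List<string>();
model.TokenGenerated += (_, t) => tokens.Add(t); // subscribing enables streaming
var answer = await model.GenerateAsync("hi");
Console.WriteLine($"{answer} (streamed {tokens.Count} tokens)");

public class ChatModelSketch
{
    public event EventHandler<string>? TokenGenerated;
    public event EventHandler<string>? PromptSent;

    public async Task<string> GenerateAsync(string prompt)
    {
        PromptSent?.Invoke(this, prompt);

        // A null check answers "are there subscribers?"; stream only if so.
        if (TokenGenerated is not null)
        {
            var sb = new StringBuilder();
            await foreach (var token in StreamAsync(prompt))
            {
                TokenGenerated.Invoke(this, token);
                sb.Append(token);
            }
            return sb.ToString();
        }

        return await CompleteAsync(prompt); // single non-streaming request
    }

    // Stand-ins for the real streaming / non-streaming API calls.
    private async IAsyncEnumerable<string> StreamAsync(string prompt)
    {
        foreach (var t in new[] { "Hel", "lo" }) { yield return t; await Task.Yield(); }
    }

    private Task<string> CompleteAsync(string prompt) => Task.FromResult("Hello");
}
```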

Which endpoint should I fill in to use Azure OpenAI?

The code is as follows

const string apiKey = "xxxxx";
const string endpoint = "https://{resources_name}.openai.azure.com/openai/deployments/{deployment_name}/chat/completions?api-version=2023-07-01-preview";
using var httpClient = new HttpClient();
var model = new Gpt35TurboModel(apiKey, endpoint, httpClient);
var response = await model.GenerateAsync("你好");
Console.WriteLine(response);


Question: What's the purpose of "LangChain.Core\Chains\StackableChains"

Hi! I'm learning the repo, and confused about "LangChain.Core\Chains\StackableChains".

The stackable chains extend BaseStackableChain, while normally other chains extend BaseChain. I can't find the difference between BaseStackableChain and BaseChain.

My question is:

  1. What's the difference between BaseStackableChain and BaseChain? What's the meaning of "Stackable"?
  2. When will I use StackableChains?

Thanks! Great job!

Move tests to NUnit

I don’t like the approach for tests using the [Ignore] attribute; it prevents running all tests locally at once.
Therefore, I'm thinking of moving all tests to NUnit and using the [Explicit] attribute. This skips those tests in a normal run of the full suite, unless the user runs them manually.
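For illustration, the pattern would look like this with NUnit (the fixture and test names are made up; [Explicit] behavior is as documented by NUnit):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class OpenAiTests
{
    // Skipped by "run all tests", but still runnable on demand —
    // unlike [Ignore], which blocks the test entirely.
    [Test]
    [Explicit("Requires OPENAI_API_KEY; run manually.")]
    public void Can_call_real_api()
    {
        var key = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
        Assert.That(key, Is.Not.Null);
    }

    // Runs in every test run as usual.
    [Test]
    public void Runs_in_every_test_run()
    {
        Assert.That(1 + 1, Is.EqualTo(2));
    }
}
```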

Remove LLMChain and refactor

Peter James — Yesterday at 10:40 AM
Something else interesting / confusing is that there are two LLMChain classes in relatively close proximity (LangChain.Chains.HelperChains vs. LangChain.Chains.LLM). They both end up implementing IChain, but it seems one of them is for the pipeline syntax? Also, the two classes use different capitalization LLMChain vs LlmChain.

Google Gemini Support

What would you like to be added:

Support for the recently launched Google Gemini
https://ai.google.dev/

Why is this needed:

Gemini is the latest AI model from Google. It is a direct competitor to OpenAI's GPT

Anything else we need to know?

Can LangChain in C# work with a local LLM model

What would you like to be added:

LangChain in C# work with a local LLM model

Why is this needed:

I have a local LLM model service, and I want to use it instead of OpenAI's service. I think this is a common need.

Anything else we need to know?

It is very easy in Python:

import os
import openai

openai.api_base = "http://localhost:8080/v1"
openai.api_key = "sx-xxx"
OPENAI_API_KEY = "sx-xxx"
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY

completion = openai.ChatCompletion.create(
    model="lunademo",
    messages=[
        {"role": "system", "content": "You are a math teacher"},
        {"role": "user", "content": "What is Pi"},
    ],
)
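A C# equivalent could reuse the customEndpoint pattern shown for Together elsewhere on this page — a sketch only, since the exact parameter names and endpoint format may differ between package versions:

```csharp
// Sketch: pointing the OpenAI-compatible provider at a local server.
var provider = new OpenAiProvider(
    apiKey: "sx-xxx",                  // local servers typically ignore the key
    customEndpoint: "localhost:8080"); // the local OpenAI-compatible endpoint
var llm = new OpenAiChatModel(provider, id: "lunademo");
var response = await llm.GenerateAsync("What is Pi");
Console.WriteLine(response);
```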

MarkdownHeaderTextSplitter

  • You can omit the base class implementation. Just one file where this works is enough, we'll highlight the interfaces a bit later.
  • You can use GPT-4 for initial implementation
  • Tests are encouraged but not required. It's better to do something minimal than nothing.

References:

Providers: HuggingFace

Hi,

I'm new to Langchain and LLM.

I've recently deployed an LLM model using the Hugging Face text-generation-inference library on my local machine.

I've successfully accessed the model using Python by following the instructions provided at https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference.

However, I would prefer to use WPF with C#. I'm delighted to discover that there is support for Langchain with .NET.

Is there a way for me to access Hugging Face text-generation-inference in Langchain using C# as well?

Since I've deployed the model on a local server machine and I want to access the model from another client machine via an intranet connection, I'm open to alternative solutions aside from using text-generation-inference in C# on the client machine.

Any advice would be greatly appreciated.

Incorrect result for MarkdownHeaderTextSplitter

Describe the bug

I just tried using the MarkdownHeaderTextSplitter, but I believe the end result is incorrect.

Steps to reproduce the bug

Try splitting the text # Foo\n\n ## Bar\n\nHi this is Jim \nHi this is Joe\n\n ## Baz\n\n Hi this is Molly.

The result is:

["Foo\n Bar\nHi this is Jim\nHi this is Joe\n Baz\nHi this is Molly"]

Which is incorrect according to LangChain's implementation:
https://python.langchain.com/docs/modules/data_connection/document_transformers/markdown_header_metadata

Expected behavior

It should be:

[
    "Foo\nBar\nHi this is Jim  \nHi this is Joe",
    "Foo\nBaz\nHi this is Molly"
]
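The expected grouping can be sketched in plain C#: each section keeps its full header path, repeated per chunk. This is illustrative logic only (handling just # and ## levels, and flattening headers into the text as in the expected output above), not the library's implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var text = "# Foo\n\n ## Bar\n\nHi this is Jim  \nHi this is Joe\n\n ## Baz\n\n Hi this is Molly.";

var chunks = new List<string>();
string? h1 = null, h2 = null;
var body = new List<string>();

// Emit the accumulated body, prefixed with its current header path.
void Flush()
{
    if (body.Count == 0) return;
    var header = string.Join("\n", new[] { h1, h2 }.Where(h => h != null));
    chunks.Add(header + "\n" + string.Join("\n", body));
    body.Clear();
}

foreach (var raw in text.Split('\n'))
{
    var line = raw.TrimStart();
    if (line.StartsWith("## ")) { Flush(); h2 = line[3..]; }       // new subsection
    else if (line.StartsWith("# ")) { Flush(); h1 = line[2..]; h2 = null; }
    else if (line.Trim().Length > 0) body.Add(line.TrimEnd());
}
Flush();

// chunks[0] begins "Foo\nBar\n...", chunks[1] begins "Foo\nBaz\n..." —
// the header context is repeated for every section, as expected above.
foreach (var chunk in chunks) Console.WriteLine(chunk + "\n---");
```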

Screenshots

No response

NuGet package version

0.12.3-dev.110

Additional context

No response
