
lmstudio-ai / lmstudio.js

Stars: 290 · Watchers: 9 · Forks: 44 · Size: 808 KB

LM Studio TypeScript SDK (pre-release public alpha)

Home Page: https://lmstudio.ai/docs/lmstudio-sdk/quick-start

License: Apache License 2.0

Languages: TypeScript 95.14%, JavaScript 2.38%, Shell 1.30%, PowerShell 1.18%
Topics: llm, lmstudio, nodejs, typescript

lmstudio.js's People

Contributors

lmstudio-windows, mattjcly, mrdjohnson, ryan-the-crayon, yagil


lmstudio.js's Issues

Received invalid creationParameter when calling client.llm.load(modelPath)

So far I could run the lmstudio.js (@lmstudio/sdk) call "await client.llm.load(modelPath);" without any issue. I recently updated LM Studio from 0.2.25 to 0.2.26 and @lmstudio/sdk from 0.0.3 to 0.0.12. Now calling "await client.llm.load(modelPath);" throws this error:

W [LMStudioClient][LLM][ClientPort] Received communication warning from the server: Received invalid creationParameter for channel, endpointName = loadModel, creationParameter = {"path":"lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf","config":{"gpuOffload":1},"noHup":false}. Zod error:

- creationParameter.loadConfigStack: Required

This is usually caused by communication protocol incompatibility. Please make sure you are using the up-to-date versions of the SDK and LM Studio.

Note: This warning was received from the server and is printed on the client for convenience.

Adding a configuration parameter also resulted in the same error:

client.llm.load(modelPath, {
  config: { gpuOffload: 1.0 },
});

However, other functions such as "client.system.listDownloadedModels()" still work fine. Only client.llm.load() has issues.
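For reference, a minimal sketch of the full load flow, assuming SDK and LM Studio versions are actually in sync; the option shape follows the snippet above and has changed between releases, which is exactly what the Zod warning reflects:

import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();

// Same model path as in the warning above; gpuOffload takes a 0..1 fraction here.
const model = await client.llm.load(
  "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf",
  { config: { gpuOffload: 1.0 } }
);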

platform not supported

I keep getting this error upon loading a model. What is the possible issue here and how do I fix it?
(screenshot of the error attached)

Always get the messages below when I start the local HTTP server

(screenshot: config info and errors)

Hi there,

Above are the config info and errors.

Every time I start the local HTTP server, the messages below occur:

[2024-05-17 23:21:10.209] [INFO] [LM STUDIO SERVER] Verbose server logs are DISABLED
[2024-05-17 23:21:12.539] [INFO] [LM STUDIO SERVER] Stopping server..
[2024-05-17 23:21:12.540] [INFO] [LM STUDIO SERVER] Server stopped
[2024-05-17 23:21:12.541] [INFO] [LM STUDIO SERVER] Verbose server logs are DISABLED
[2024-05-17 23:21:14.856] [INFO] [LM STUDIO SERVER] Stopping server..
[2024-05-17 23:21:14.856] [INFO] [LM STUDIO SERVER] Server stopped
[2024-05-17 23:21:14.857] [INFO] [LM STUDIO SERVER] Verbose server logs are DISABLED

Does anyone have any suggestions?

Thank you.

Jarvis

Network Proxy

The software does not support configuring a network proxy, but a proxy is required because Hugging Face cannot be reached directly from mainland China.

Zod errors on getModelInfo and loadModel endpoints

LM Studio version: 0.2.21
SDK version: 0.0.12

I have the LMS GUI open, the local server running, and a model loaded.

const model = await client.llm.get({});

Causes the following error:

W [LMStudioClient][LLM][ClientPort] Produced communication warning: Received invalid result for rpc, endpointName = getModelInfo, result = {"identifier":"bartowski/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q8_0.gguf","path":"bartowski/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q8_0.gguf"}. Zod error:

- result.sessionIdentifier: Required
- result.descriptor: Required

This is usually caused by communication protocol incompatibility. Please make sure you are using the up-to-date versions of the SDK and LM Studio.

If I try to load a model by name, I get a different error:

const model = await client.llm.load('bartowski/Starling-LM-7B-beta-GGUF');

W [LMStudioClient][LLM][ClientPort] Received communication warning from the server: Received invalid creationParameter for channel, endpointName = loadModel, creationParameter = {"path":"bartowski/Starling-LM-7B-beta-GGUF","config":{},"noHup":false}. Zod error:

- creationParameter.acceleration: Required

This is usually caused by communication protocol incompatibility. Please make sure you are using the up-to-date versions of the SDK and LM Studio.

Note: This warning was received from the server and is printed on the client for convenience.

embeddings are not generated using langchain js OpenAIEmbeddings

Hi, I am following a tutorial; here is the GitHub link: https://github.com/leonvanzyl/langchain-js/blob/lesson-4/retrieval-chain.js. Everything works up to lesson 3. In lesson 4 I am using this code to create embeddings, but there is no message or error; the program is just stuck. Maybe I am doing something wrong!

Here is the full code; the last line, console.log("vectorstore :>> ", vectorstore), is never reached:

import { ChatOpenAI } from "@langchain/openai"
import { createStuffDocumentsChain } from "langchain/chains/combine_documents"
import { ChatPromptTemplate } from "@langchain/core/prompts"
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio"
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter"
import { OpenAIEmbeddings } from "@langchain/openai"
import { MemoryVectorStore } from "langchain/vectorstores/memory"
import { createRetrievalChain } from "langchain/chains/retrieval"

let modelToUse = "microsoft/Phi-3-mini-4k-instruct-gguf"
modelToUse =
  "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf"

// Instantiate the model
const model = new ChatOpenAI({
  model: modelToUse,
  apiKey: "not-required",
  temperature: 0.7,
  configuration: {
    baseURL: "http://127.0.0.1:1234/v1",
    // gpuOffload: "max",
    // noHup: true,
  },
})

// Create Prompt Template from fromMessages
const prompt = ChatPromptTemplate.fromTemplate(
  `Answer the user's question from the following context:
  {context}
  Question: {input}`,
)

const chain = await createStuffDocumentsChain({ llm: model, prompt })

// Use Cheerio to scrape content from webpage and create documents
const loader = new CheerioWebBaseLoader(
  "https://js.langchain.com/docs/expression_language/",
)

const docs = await loader.load()

const embeddings = new OpenAIEmbeddings({
  model: modelToUse,
  apiKey: "not-required",
  verbose: true,

  configuration: {
    baseURL: "http://127.0.0.1:1234/v1",
  },
})

// console.log("embeddings :>> ", embeddings)

// Create Vector Store
const vectorstore = await MemoryVectorStore.fromDocuments(docs, embeddings)
console.log("vectorstore :>> ", vectorstore)

I found an issue that might be relevant to this: langchain-ai/langchain#21318
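One way to narrow this down is to bypass LangChain entirely (a diagnostic sketch, assuming LM Studio exposes an OpenAI-compatible /v1/embeddings route; if it does not, this request should fail or hang the same way OpenAIEmbeddings does). Note that the model above is a chat model; whether the server can produce embeddings from it is part of what this checks:

// Call the local endpoint directly to see what the server actually returns.
// Requires Node 18+ for global fetch.
const modelToUse =
  "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf"

const res = await fetch("http://127.0.0.1:1234/v1/embeddings", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model: modelToUse, input: ["hello world"] }),
})
console.log(res.status, await res.text())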

npm install @lmstudio/sdk:

What version of Node is required?

I have a MacBook Pro M3 Max (Apple Silicon) and get errors trying to run: npm install @lmstudio/sdk

Any advice on this?

Here is a short section of the Node errors:
npm error code 1
npm error path /Users/henrijohnson/Repos/lmstudio.js/node_modules/lmdb
npm error command failed
npm error command sh -c node-gyp-build-optional-packages
npm error make: Entering directory '/Users/henrijohnson/Repos/lmstudio.js/node_modules/lmdb/build'
npm error CXX(target) Release/obj.target/lmdb/src/lmdb-js.o
npm error make: Leaving directory '/Users/henrijohnson/Repos/lmstudio.js/node_modules/lmdb/build'
npm error /Users/henrijohnson/Repos/lmstudio.js/node_modules/node-gyp-build-optional-packages/index.js:77
npm error throw new Error('No native build was found for ' + target + '\n attempted loading from: ' + dir + ' and package:' +
npm error ^
npm error
npm error Error: No native build was found for platform=darwin arch=arm64 runtime=node abi=120 uv=1 armv=8 libc=glibc node=21.2.0
npm error attempted loading from: /Users/henrijohnson/Repos/lmstudio.js/node_modules/lmdb and package: @lmdb/lmdb-darwin-arm64
npm error
npm error at load.path (/Users/henrijohnson/Repos/lmstudio.js/node_modules/node-gyp-build-optional-packages/index.js:77:9)
npm error at load (/Users/henrijohnson/Repos/lmstudio.js/node_modules/node-gyp-build-optional-packages/index.js:29:25)
npm error at Object.<anonymous> (/Users/henrijohnson/Repos/lmstudio.js/node_modules/node-gyp-build-optional-packages/build-test.js:19:19)
npm error at Module._compile (node:internal/modules/cjs/loader:1376:14)
npm error at Module._extensions..js (node:internal/modules/cjs/loader:1435:10)
npm error at Module.load (node:internal/modules/cjs/loader:1207:32)
npm error at Module._load (node:internal/modules/cjs/loader:1023:12)
npm error at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:135:12)
npm error at node:internal/main/run_main_module:28:49
npm error
npm error Node.js v21.2.0
npm error
npm error The failure above indicates the primary issue with the native builds which are included for all major platforms. Will now attempt to build the package locally in case this can be resolved by re-compiling.
npm error gyp info it worked if it ends with ok
npm error gyp info using [email protected]
npm error gyp info using [email protected] | darwin | arm64
npm error gyp info find Python using Python version 3.12.3 found at "/opt/homebrew/opt/[email protected]/bin/python3.12"
npm error gyp info spawn /opt/homebrew/opt/[email protected]/bin/python3.12
npm error gyp info spawn args [
npm error gyp info spawn args '/Users/henrijohnson/Repos/lmstudio.js/node_modules/node-gyp/gyp/gyp_main.py',
npm error gyp info spawn args 'binding.gyp',
npm error gyp info spawn args '-f',
npm error gyp info spawn args 'make',
npm error gyp info spawn args '-I',
npm error gyp info spawn args '/Users/henrijohnson/Repos/lmstudio.js/node_modules/lmdb/build/config.gypi',
npm error gyp info spawn args '-I',
npm error gyp info spawn args '/Users/henrijohnson/Repos/lmstudio.js/node_modules/node-gyp/addon.gypi',
npm error gyp info spawn args '-I',
npm error gyp info spawn args '/Users/henrijohnson/Library/Caches/node-gyp/21.2.0/include/node/common.gypi',
npm error gyp info spawn args '-Dlibrary=shared_library',
npm error gyp info spawn args '-Dvisibility=default',
npm error gyp info spawn args '-Dnode_root_dir=/Users/henrijohnson/Library/Caches/node-gyp/21.2.0',
npm error gyp info spawn args '-Dnode_gyp_dir=/Users/henrijohnson/Repos/lmstudio.js/node_modules/node-gyp',
npm error gyp info spawn args '-Dnode_lib_file=/Users/henrijohnson/Library/Caches/node-gyp/21.2.0/<(target_arch)/node.lib',
npm error gyp info spawn args '-Dmodule_root_dir=/Users/henrijohnson/Repos/lmstudio.js/node_modules/lmdb',
npm error gyp info spawn args '-Dnode_engine=v8',
npm error gyp info spawn args '--depth=.',
npm error gyp info spawn args '--no-parallel',
npm error gyp info spawn args '--generator-output',
npm error gyp info spawn args 'build',
npm error gyp info spawn args '-Goutput_dir=.'
npm error gyp info spawn args ]
npm error gyp info spawn make
npm error gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
npm error In file included from ../src/lmdb-js.cpp:1:
npm error ../src/lmdb-js.h:4:10: fatal error: vector: No such file or directory
npm error 4 | #include <vector>
npm error | ^~~~~~~~
npm error compilation terminated.
npm error make: *** [lmdb.target.mk:161: Release/obj.target/lmdb/src/lmdb-js.o] Error 1
npm error gyp ERR! build error
npm error gyp ERR! stack Error: make failed with exit code: 2
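Not stated in the thread, but the log shows Node v21.2.0 (abi=120), for which lmdb ships no prebuilt binary, and the fallback local compile then dies on missing C++ standard headers. Moving to an even-numbered LTS release is the usual fix; the exact supported range is an assumption here, and the SDK's package.json "engines" field is authoritative. A trivial runtime guard, for illustration:

// Hypothetical minimum; check the SDK's "engines" field for the real requirement.
const major = Number(process.versions.node.split(".")[0]);
if (major < 18 || major % 2 !== 0) {
  console.warn(
    `Node ${process.version} may lack prebuilt native modules; prefer an even-numbered LTS release.`
  );
}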

Adding new models in the model list

Hi,
I have a fine-tuned model in GGUF format. Can someone guide me on how to add this model and access it using the LM Studio Windows version and lmstudio.js?
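Not answered in this thread, but a common approach (an assumption on my part, not something the maintainers confirm here) is to place the .gguf under LM Studio's models directory in a publisher/repo/file layout, confirm it is visible, then load it by that relative path:

import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();

// Hypothetical layout: <LM Studio models dir>/my-org/my-finetune-GGUF/my-finetune-Q4_K_M.gguf
// First confirm LM Studio can see the file at all:
console.log(await client.system.listDownloadedModels());

// Then load it by the relative path the listing reports:
const model = await client.llm.load("my-org/my-finetune-GGUF/my-finetune-Q4_K_M.gguf");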

How to switch/choose a different quantization version from a model path

From the documentation I see something like:

// Matches any quantization
const llama3 = await client.llm.get({ path: "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF" });

Does it mean all quantization versions are loaded into memory? Or is there a method to specify which version to actually load?
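Reading the quoted snippet, a repo-level path appears to match a single model at whatever quantization is available, rather than loading all of them; other reports in this thread pass the full .gguf file path to pin an exact quantization. A sketch of both, as an interpretation rather than a confirmed answer:

import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();

// Repo-level path: matches one model, whichever quantization is available/loaded.
const anyQuant = await client.llm.get({
  path: "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF",
});

// Full file path: load one specific quantization (Q4_K_M here).
const q4 = await client.llm.load(
  "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf"
);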

creationParameter.loadConfigStack: Required (LM Studio version 0.2.26)

W [LMStudioClient][LLM][ClientPort] Received communication warning from the server: Received invalid creationParameter for channel, endpointName = loadModel, creationParameter = {"path":"bartowski/gemma-2-9b-it-GGUF","identifier":"my-model","config":{"contextLength":1024,"gpuOffload":"max"},"noHup":true}. Zod error:

  • creationParameter.loadConfigStack: Required

Named export 'LMStudioClient' not found

import { LMStudioClient } from '@lmstudio/sdk';

When I import the SDK like this, I get the following error:

import { LMStudioClient } from '@lmstudio/sdk';
         ^^^^^^^^^^^^^^
SyntaxError: Named export 'LMStudioClient' not found. The requested module '@lmstudio/sdk' is a CommonJS module, which may not support all module.exports as named exports. CommonJS modules can always be imported via the default export, for example using:

import pkg from '@lmstudio/sdk';
const { LMStudioClient } = pkg;

    at ModuleJob._instantiate (node:internal/modules/esm/module_job:132:21)
    at async ModuleJob.run (node:internal/modules/esm/module_job:214:5)
    at async ModuleLoader.import (node:internal/modules/esm/loader:329:24)
    at async loadESM (node:internal/process/esm_loader:28:7)
    at async handleMainPromise (node:internal/modules/run_main:113:12)

Adding /index.mjs does work though:

import { LMStudioClient } from '@lmstudio/sdk/index.mjs';
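Besides the /index.mjs workaround, the interop that Node itself suggests in the error also works (a sketch; the default-export shape is exactly what the message above describes):

// ESM file: import the CommonJS module's default export, then destructure.
import pkg from '@lmstudio/sdk';
const { LMStudioClient } = pkg;

// Or, from a CommonJS file:
// const { LMStudioClient } = require('@lmstudio/sdk');

const client = new LMStudioClient();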

Feature Request: Embedding model support

Currently neither lmstudio-js nor lms appears to support embedding models. They are listed with lms ls, but there is no option to load them with lms load or the TypeScript library. I'd like embeddings support in lmstudio-js for better integration of embeddings into TypeScript apps. This should approximately entail the ability to load embedding models and query the API with some kind of embed function which takes a string list and returns a list of float lists (or more accurately, a kind of EmbeddingsPredictionResult with that as its content field; I don't think the endpoint supports streaming, so there is no need for an OngoingPrediction).
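A hypothetical sketch of what the requested surface could look like; none of these names exist in the SDK today, they simply restate the proposal as TypeScript:

// All names below are hypothetical: the feature request expressed as types.
interface EmbeddingsPredictionResult {
  content: number[][]; // one float vector per input string
}

interface EmbeddingModel {
  embed(inputs: string[]): Promise<EmbeddingsPredictionResult>;
}

// Imagined usage, mirroring the existing llm namespace:
//   const model = await client.embedding.load("some-embedding-model-GGUF");
//   const { content } = await model.embed(["first text", "second text"]);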
