logancyang / obsidian-copilot

A ChatGPT Copilot in Obsidian

Home Page: https://www.obsidiancopilot.com/

License: GNU Affero General Public License v3.0

JavaScript 1.85% TypeScript 95.96% CSS 2.18%
chatgpt obsidian-plugin openai-api

obsidian-copilot's Introduction

πŸ” Copilot for Obsidian


Copilot for Obsidian is a free and open-source ChatGPT interface right inside Obsidian. It has a minimalistic design and is straightforward to use.

  • πŸ’¬ ChatGPT UI in Obsidian.
  • πŸ› οΈ Prompt AI with your writing using Copilot commands to get quick results.
  • πŸš€ Turbocharge your Second Brain with AI.
  • 🧠 Talk to your past notes for insights.

My goal is to make this AI assistant local-first and privacy-focused. It has a local vector store and can work with local models for chat and QA completely offline! More features are under construction. Stay tuned!


If you enjoy Copilot for Obsidian, please consider sponsoring this project, or donate by clicking the button below. It will help me keep this project going to build toward a privacy-focused AI experience. Thank you!

Buy Me A Coffee

πŸŽ‰ HIGHLY ANTICIPATED v2.5.0: Vault QA (BETA) mode (with local embedding support)! Claude 3! πŸŽ‰πŸŽ‰πŸŽ‰

The biggest, most anticipated update yet is here!

The brand new Vault QA (BETA) mode allows you to chat with your whole vault, powered by a local vector store. Ask questions and get answers with cited sources!

What's more, with Ollama local embeddings and local chat models, this mode works completely offline! This is a huge step toward truly private and local AI assistance inside Obsidian!

Since Claude 3 models were announced today (3/4/2024), I managed to add them in this release too. Get your API key from Anthropic's site; the Claude 3 models are now available in the settings.

(Huge shoutout to @AntoineDao for working with me on Vault QA mode!)

FREE Models

OpenRouter.ai hosts some of the best open-source models at the moment, such as MistralAI's new models. Check out their website for all the good stuff they have!

LM Studio and Ollama are the 2 best choices for running local models on your own machine. Please check out the super simple setup guide here. Don't forget to flex your creativity in custom prompts using local models!

πŸ› οΈ Features

  • Chat with ChatGPT right inside Obsidian in the Copilot Chat window.
  • No repetitive login. Use your own API key (stored locally).
  • No monthly fee. Pay only for what you use.
  • Model selection across OpenAI, Azure, Google, Claude 3, OpenRouter, and local models powered by LM Studio and Ollama.
  • No need to buy ChatGPT Plus to use GPT-4.
  • No usage cap for GPT-4 like ChatGPT Plus.
  • One-click copying of any message as markdown.
  • One-click saving of the entire conversation as a note.
  • Use a super long note as context, and start a discussion around it by switching to "Long Note QA" in the Mode Selection menu.
  • Chat with your whole vault by selecting "Vault QA" mode. Ask questions and get cited responses!
  • All QA modes are powered by retrieval augmentation with a local vector store. No sending your data to a cloud-based vector search service!
  • Easy commands to simplify, emojify, summarize, translate, change tone, fix grammar, rewrite into a tweet/thread, count tokens and more.
  • Set your own parameters like LLM temperature, max tokens, and conversation context based on your needs (please be mindful of the API cost).
  • User custom prompts! You can add, apply, edit, and delete your custom prompts, persisted in your local Obsidian environment! Be creative with your own prompt templates; the sky is the limit!
  • Local model support for offline chat using LM Studio and Ollama.
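Under the hood, the QA modes boil down to a retrieval-augmentation loop: embed the question, rank stored note chunks by vector similarity, and pass the top matches to the chat model as cited context. A minimal, dependency-free sketch of the ranking step (`Chunk` and `topK` are illustrative names, not the plugin's actual API):

```typescript
// Minimal sketch of the vector-ranking step behind the QA modes.
// `Chunk` and `topK` are illustrative names, not the plugin's real API.
interface Chunk {
  text: string;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored chunks against the query embedding; the top hits become
// the context (and cited sources) the answer is grounded in.
function topK(query: number[], store: Chunk[], k: number): Chunk[] {
  return [...store]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

Because the store and the similarity search both live on your machine, no note content has to leave your vault until the selected chunks are sent to the chat model — and with a local model, not even then.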

🎬 Demos

πŸ€— New to Copilot? Quick Guide for Beginners:

  • Chat with ChatGPT, copy messages to note, save entire conversation as a note
  • QA around your past note
  • Fix grammar and spelling, Summarize, Simplify, Emojify, Remove URLs
  • Generate glossary, table of contents
  • Translate to a language of your choosing
  • You can find all Copilot commands in your command palette

To use Copilot, you need API keys from one of the LLM providers such as OpenAI, Azure OpenAI, Gemini, OpenRouter (Free!). You can also use it offline with LM Studio or Ollama!

Once you put your valid API key in the Copilot setting, don't forget to click Save and Reload. If you are a new user and have trouble setting it up, please open an issue and describe it in detail.

πŸ’¬ User Custom Prompt: Create as Many Copilot Commands as You Like!

You can add, apply, edit and delete your own custom Copilot commands, all persisted in your local Obsidian environment! Check out this demo video below!

🧠 Advanced Custom Prompt! Unleash your creativity and fully leverage the long context window!

This video shows how Advanced Custom Prompt works. This form of templating enables many more possibilities with long-context-window models. If you have your own creative use cases, don't hesitate to share them in the discussion or in the YouTube comment section!

πŸ”§ Copilot Settings

The settings page lets you set your own temperature, max tokens, and conversation context based on your needs.

New models will be added as I get access.

You can also use your own system prompt, and choose between different embedding providers such as OpenAI, CohereAI (their trial API is free and quite stable!), and Huggingface Inference API (free but sometimes times out).

βš™οΈ Installation

Copilot for Obsidian is now available in the Obsidian Community Plugins directory!

  • Open the Community Plugins settings page and click the Browse button.
  • Search for "Copilot" in the search bar and find the plugin with this exact name.
  • Click on the Install button.
  • Once the installation is complete, enable the Copilot plugin by toggling on its switch in the Community Plugins settings page.

Now you can see the chat icon in the left ribbon; clicking it opens the chat panel on the right! Don't forget to check out the Copilot commands available in the command palette!

⛓️ Manual Installation

  • Go to the latest release
  • Download main.js, manifest.json, and styles.css and put them under .obsidian/plugins/obsidian-copilot/ in your vault
  • Open your Obsidian settings > Community plugins, and turn on Copilot.

πŸ”” Note

  • The chat history is not saved by default. Please use "Save as Note" to save it. The note will have a title like Chat-Year_Month_Day-Hour_Minute_Second; you can rename it as needed.
  • "New Chat" clears all previous chat history. Again, please use "Save as Note" if you would like to save the chat.
  • "Use Long Note as Context" creates a local vector index for the active long note so that you can chat with a note longer than the model's context window! To start the QA, please switch from "Chat" to "QA" in the Mode Selection dropdown.
  • You can set a very long context in the setting "Conversation turns in context" if needed.
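For reference, the Chat-Year_Month_Day-Hour_Minute_Second title format described above can be produced with a few lines of TypeScript (an illustrative sketch; the plugin's exact formatting code may differ):

```typescript
// Illustrative sketch of the saved-note title format described above,
// e.g. "Chat-2023_06_02-18_22_30". Not the plugin's actual code.
function chatNoteTitle(d: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  return `Chat-${d.getFullYear()}_${pad(d.getMonth() + 1)}_${pad(d.getDate())}` +
    `-${pad(d.getHours())}_${pad(d.getMinutes())}_${pad(d.getSeconds())}`;
}
```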

πŸ“£ Again, please always be mindful of the API cost if you use GPT-4 with a long context!

πŸ€” FAQ (please read before submitting an issue)

"You do not have access to this model"
  • You need to have access to some of the models like GPT-4 or Azure ones to use them. If you don't, sign up on their waitlist!
  • A common misunderstanding I see is that some think a ChatGPT Plus subscription grants access to the GPT-4 API. It does not: you need separate GPT-4 API access to use the GPT-4 model in this plugin. Please check first that you can successfully use your model in the OpenAI playground https://platform.openai.com/playground?mode=chat. If not, you can apply for GPT-4 API access here https://openai.com/waitlist/gpt-4-api. Once you have API access, you can use GPT-4 with this plugin without a ChatGPT Plus subscription!
  • Reference issue: #3 (comment)
It's not using my note as context
  • Please don't forget to switch to "QA" in the Mode Selection dropdown in order to start the QA. Copilot does not have your note as context in "Chat" mode.
  • In fact, you don't have to click the button on the right before starting the QA. Switching to QA mode in the dropdown is enough for Copilot to read the note as context. The button on the right is only for manually rebuilding the index for the active note, e.g. when you switch context to another note, or you think the current index is corrupted because you switched the embedding provider.
  • Reference issue: #51
Unresponsive QA when using Huggingface as the Embedding Provider
  • Huggingface Inference API is free to use, but it can frequently return errors such as 503 or 504 when their servers have issues. If this is a problem for you, please consider using OpenAI or CohereAI as the embedding provider. Just keep in mind that OpenAI costs more, especially with very long notes as context.
"insufficient_quota"
  • It might be because you haven't set up payment for your OpenAI account, or you exceeded your max monthly limit. OpenAI has a cap on how much you can use their API, usually $120 for individual users.
  • Reference issue: #11
"context_length_exceeded"
  • GPT-3.5 has a 4096-token context limit and GPT-4 has 8K (a 32K version will be available to the public soon, per OpenAI). So if you set a large token limit in your Copilot settings, you could get this error. Note that the prompts behind the scenes for Copilot commands also take up tokens, so please limit your message length and max tokens to avoid this error. (For QA with unlimited context, use the "QA" mode in the dropdown! Requires Copilot v2.1.0.)
  • Reference issue: #1 (comment)
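To reason about this error, remember that the context limit must cover the prompt (including Copilot's hidden command prompts), your message, and the max tokens reserved for the reply. A rough sanity check using the common ~4-characters-per-token approximation (a heuristic for English text, not the model's real tokenizer):

```typescript
// Rough token-budget check. The 4-chars-per-token ratio is a common
// approximation for English text, not the model's actual tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// The prompt estimate plus the tokens reserved for the reply must fit
// within the model's context limit, or the API rejects the request.
function fitsContext(prompt: string, maxReplyTokens: number, contextLimit: number): boolean {
  return estimateTokens(prompt) + maxReplyTokens <= contextLimit;
}
```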
Azure issue
  • It's a bit tricky to get all the Azure credentials right on the first try. My suggestion is to test with curl in your terminal first, make sure you get a response back, and then set the correct params in the Copilot settings. Example:
    curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=VERSION \
      -H "Content-Type: application/json" \
      -H "api-key: YOUR_API_KEY" \
      -d "{
        \"prompt\": \"Once upon a time\",
        \"max_tokens\": 5
      }"
    
  • Reference issue: #98

When opening an issue, please include relevant console logs. You can go to Copilot's settings and turn on "Debug mode" at the bottom for more console messages!

πŸ“ Planned features (feedback welcome)

  • New modes
    • Chat mode (originally Conversation mode): You can now provide multiple notes at once as context in conversations, for LLMs with an extended context window.
    • QA mode: You can index any folder and perform question and answer sessions using a local search index and Retrieval-Augmented Generation (RAG) system.
  • Support embedded PDFs as context
  • Interact with a powerful AI agent that knows your vault and can search, filter, and use your notes as context. Explore, brainstorm, and research like never before!

πŸ™ Thank You

Did you know that even the timer on Alexa needs internet access? In this era of corporate-dominated internet, I still believe there's room for powerful tech that's focused on privacy. A great local AI agent in Obsidian is the ultimate form of this plugin. If you share my vision, please consider sponsoring this project or buying me coffees!

Buy Me A Coffee

Please also help spread the word by sharing about the Copilot for Obsidian Plugin on Twitter, Reddit, or any other social media platform you use.

You can find me on Twitter/X @logancyang.

obsidian-copilot's People

Contributors

hdykokd, lisandra-dev, logancyang, petery789, seardnaschmid, sokole1, welding-torch


obsidian-copilot's Issues

Stop Generating button doesn't work

Running Obsidian 1.3.5, plugin version 2.2.0, using GPT-4 model.

Steps to reproduce:

  • Open Copilot sidebar
  • Ask anything
  • Click Stop Generating

Expected Result:

  • The message stream stops

Actual result:

  • The message stream keeps generating

Video:

Enregistrement.de.l.ecran.2023-06-02.a.18.22.30.mov

Support longer notes for "note as context"

Currently Copilot just wraps the OpenAI API call so it is subject to the token limit. Will work on a new feature to go around the token limit soon by using LangChain. Please feel free to share use cases and ideas about other features that can be enabled with LangChain.

LangChain error

Dear Development Team,

Thank you for creating this plugin. I have followed the user guide closely, but unfortunately, I am unable to utilize it successfully. Whenever I attempt to use it, an error message is displayed: "LangChain error: AxiosError: Request failed with status code 429 and body {"error": {"message": "You exceeded your current quota, please check your plan and billing details.", "type": "insufficient_quota", "param": null, "code": null}}".

I would like to know whether this error indicates that I need to upgrade to the ChatGPT Plus subscription in order to use the plugin, or if it is simply not compatible with Obsidian. What should I do to fix it?

I am eagerly awaiting your response. Thank you for your attention.

Best regards,
Mengelei

Support for Longer documents

Hey @logancyang love the plugin

QA is working great for short notes; however, long notes are returning "not enough context".


Is there a way around this?

Regardless of whether it's the 16k model or GPT-4, the context length limit (26k characters, ~6300 tokens) seems to be the same. Same if I increase the token count (from 1000 to 8000, which is the current max).

I've attached a long file which hopefully helps in investigation
384 Matthew McConaughey Freedom Truth Family Hardship and Love.md

LangChain Error: model_not_found

Hey, I get this error all the time. I have a subscription with ChatGPT, API Code is copied to Obsidian. Any ideas what the problem could be?

Thanks,
Mike

Feature: Text to speech

It would be useful to have chat output come out as text to speech, with a number of text to speech options including default OS TTS and integration with ElevenLabs and other text to speech LMs.

Being able to highlight text and have it read out loud would also be super helpful, as well as being able to save recordings and embed them in documents.

Struggling to get set up - open api 429 error

Hello,

Apologies that this may well be a 'me setting it up wrong' thing

But I cannot get the plugin to work

I have downloaded the files and moved them into the directory specified. I also tried using Obsidian BRAT when that didn't work.

I have logged into ChatGPT online, made an API key, pasted it into the settings for Copilot, and have it enabled etc.

When I open it and enter a chat, I get an error popup saying "open api error: null, pls see console" etc.

the console for it is:

Error in streamSSE: {
    "error": {
        "message": "You exceeded your current quota, please check your plan and billing details.",
        "type": "insufficient_quota",
        "param": null,
        "code": null
    }
}

getAIResponse @ plugin:copilot:37823

The error is quite clear, but I can log in and use ChatGPT just fine. I am on a normal free plan and don't really use it much yet, so I shouldn't be beyond whatever their free allowance is.

Is there some setting somewhere I am failing to see/change?

chat with current note

This functionality needs to be fixed. I keep getting "I'm sorry, I cannot summarize the note without further information. Please provide me with more details about the note you are referring to."

it's not using my note as context

First, I clicked on "Use active note as context"
I'm having this issue, it's not using my note as context:

Reading [[My note]]...
Please switch to "QA: Active Note" to ask questions about it.

or

I have Read [[Adam]].
Please switch to "QA: Active Note" to ask questions about it.

2023-06-01_14-57

Edit mode for custom prompts

Are you planning an edit mode for custom prompts, so it's possible to edit a portion of the prompt without the need to delete and re-add, or is there already one and I'm missing it?

QA mode: after the index is built, the conversation keeps showing "LangChain content_length_exceeded"

Obsidian Developer Console
plugin:quickadd:18499 Loading QuickAdd
plugin:quickadd:16735 QuickAdd: (LOG) No migrations to run.
plugin:oz-image-plugin:188 Image in Editor Plugin is loaded
plugin:obsidian-pandoc:10084 Loading Pandoc plugin
plugin:dataview:19532 Dataview: version 0.5.56 (requires obsidian 0.13.11)
plugin:table-editor-obsidian:23712 loading markdown-table-editor plugin
plugin:shortcuts-extender:83 e
plugin:obsidian-tasks-plugin:224 loading plugin "tasks"
plugin:search-on-internet:580 loading search-on-internet
plugin:obsidian-day-planner:7214 Loading Day Planner plugin
plugin:obsidian-outliner:3294 Loading obsidian-outliner
plugin:recent-files-obsidian:223 Recent Files: Loading plugin v1.3.6
plugin:recent-files-obsidian:261 Recent Files: maxLength is not set, using default (50)
plugin:hotkeysplus-obsidian:43 Loading Hotkeys++ plugin
plugin:obsidian-local-rest-api:47287 REST API listening on https://127.0.0.1:27124/
plugin:obsidian-local-rest-api:47295 REST API listening on http://127.0.0.1:27123/
plugin:media-extended:28 loading media-extended
plugin:obsidian42-brat:23 loading Obsidian42 - BRAT
plugin:homepage:2 Homepage: Home (method: Replace all open notes, view: Default view, kind: File)
plugin:vantage-obsidian:81 Loading the Vantage plugin.
plugin:text-snippets-obsidian:39 Loading snippets plugin
plugin:obsidian-weread-plugin:2 load weread plugin
plugin:obsidian-weread-plugin:2 --------init cookie------ Array(9)
plugin:obsidian-weread-plugin:2 [weread plugin] setting user vid=> 17000040
plugin:obsidian-weread-plugin:2 [weread plugin] setting user name=> ε‘ι˜³δΉ”ζœ¨
plugin:obsidian-hypothesis-plugin:2 loading plugin 2023/6/14 14:34:29
plugin:obsidian-hypothesis-plugin:2 Start syncing...
plugin:obsidian-to-anki-plugin:30969 loading Obsidian_to_Anki...
plugin:obsidian-wordpress:92533 loading obsidian-wordpress plugin
plugin:obsidian-wordpress:80149 Object '2'
plugin:copilot:52453 New chain created: llm_chain
3plugin:copilot:54064 Set chain: llm_chain
plugin:dataview:12575 Dataview: all 1390 files have been indexed in 0.876s (1390 cached, 0 skipped).
plugin:obsidian-hypothesis-plugin:2 StartAutoSync: this.timeoutIDAutoSync 138 with 2 minutes
plugin:copilot:54113 Creating vector store...
plugin:copilot:8921 Refused to set unsafe header "User-Agent"
setRequestHeader @ plugin:copilot:8921
plugin:copilot:54120 Vector store created successfully.
plugin:copilot:54095 New retrieval qa chain with contextual compression created for document hash: 8579a750b8c5b8f2cc9228cf50b20ecf
plugin:copilot:54101 Set chain: retrieval_qa
plugin:copilot:54064 Set chain: llm_chain
plugin:copilot:54113 Creating vector store...

Embedded CSS conflicts with default styling

Tested w. Copilot v1.2.1

The following CSS overrides default p and ul styles causing a problem with their margins:

obsidian-copilot/styles.css

Lines 163 to 172 in 5ababd8

.message-content pre, p {
margin: 0;
padding: 0;
}
.message-content ol, ul {
list-style: none;
padding-left: 20px;
margin: 0;
}

Example

screenshot_fgEEFKa6_gh

I edited the CSS to this, which seems to fix it:

.message-content pre, .message-content p {
  margin: 0;
  padding: 0;
}

.message-content ol, .message-content ul {
  list-style: none;
  padding-left: 20px;
  margin: 0;
}

StudyBot with SudoLang

Thank you so much for the plugin; it is phenomenal. It would be great to be able to use the functionality of the SudoLang pseudo-language, especially the ability to use the StudyBot link, developed by the author, to study the information in the active note and expand on it with the information the LLM has.

Support user custom prompts

Users should be able to define their own prompts invoked by commands, just like how "summarize selection" and other built-in prompts work.

Feature Requests

Amazing app, but to become really useful I think there is some low-hanging fruit you can integrate! I would recommend using LangChain as a framework, as it makes it really easy to work with LLMs. LangChain is more powerful in Python, but there is also a TypeScript integration.

  1. Full control over the chat and context like in the OpenAI playground (delete, create, or edit chat messages and continue conversations at a later time)
  2. Slice and index notes and save them locally using a local vector database server (for example https://weaviate.io/). This makes it possible to search across the whole vault semantically and use context from anywhere at any time. For slicing, there are several options built into LangChain: https://docs.langchain.com/docs/components/indexing
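The slicing step in point 2 can be sketched without any framework: cut each note into overlapping fixed-size chunks before embedding them. LangChain's splitters are smarter about sentence and markdown boundaries; the sizes below are purely illustrative:

```typescript
// Naive fixed-size splitter with overlap, as a stand-in for LangChain's
// text splitters. chunkSize/overlap are illustrative, measured in characters.
function splitNote(text: string, chunkSize = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    // Stop once a chunk reaches the end of the note.
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk, at the cost of some duplicated embeddings.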

Feature Request: Configure note save location

Currently "Save as Note" stores notes in the root of your Obsidian vault.

For myself I'd like to save them to my media/attachments folder, or some other place so saved chats are grouped together.

Cheers!

More clean UI

Screenshot 2023-06-18 at 13 22 13.

This is with Minimal theme.

For example:
- MODEL SELECTION: I don't want to see the ChatGPT model name (hide it under one settings icon)
- STOP GENERATING: show it only when something is actually generating
- NEW CHAT: put it on top, and add an icon that represents something new (like a new file); the current one looks like an icon to regenerate the prompt
etc.

The plugin is amazing though! I really cannot wait for offline models!

Support Azure as an embedding provider

Raised in #78

Currently Azure isn't on the list of embedding providers, so you can't have Chat and Embeddings both from Azure. Need to add Azure, but note that the user must deploy a separate deployment in Azure for the embedding model.

Some buttons are obscured when the sidebar is not wide enough

Hello there,

I am really enjoying the Copilot extension. However, I have noticed that when the sidebar is not wide enough, some buttons become obscured and inaccessible. This can be troublesome, especially when trying to use the extension on a smaller screen or in split-screen mode.

Would you please investigate this issue and find a solution to make these buttons more accessible? One effective solution would be to place these buttons on multiple lines. It would be highly appreciated if you could address this matter in a future update.

Thank you for considering my concern.

Best regards,
Chen

Screenshot 2023-06-09 at 10 57 27

Support local ai, to ensure privacy

I saw that there is a goal to ensure privacy for this plugin. For me this would mean "it works without sending my notes around the globe to OpenAI".

On the other hand, there are projects that try to mimic the openai interface, and host LLM locally.

Supporting https://localai.io/ could end up being a simple solution for this.

Extend the options to "OpenAI, HuggingFace, LocalAI", and allow entering a local URL

Many Thank Yous and Futures

I've become a fan of ChatGPT this week because of this plugin. Many thank-yous! It is so cool to have chats entirely within Obsidian.

Question - Do you have plans to implement some of the same kinds of filters that Microsoft offers in the "compose" component of its Edge Copilot?

Cheers Logan!

Plugin not working

Obsidian v1.2.7
Copilot v.1.2
Valid API key, newly acquired GPT+

First try - GPT-4
Console output.
Failed to load resource: the server responded with a status of 404 ()
plugin:copilot:37818 Error in streamManager.streamSSE:
CustomEvent
data: "{\n "error": {\n "message": "The model: gpt-4 does not exist",\n "type": "invalid_request_error",\n "param": null,\n "code": "model_not_found"\n }\n}\n"
isTrusted: false
source: SSE2 {INITIALIZING: -1, CONNECTING: 0, OPEN: 1, CLOSED: 2, url: 'https://api.openai.com/v1/chat/completions', …}
bubbles: false
cancelBubble: false
cancelable: false
composed: false
currentTarget: null
defaultPrevented: false
detail: null
eventPhase: 0
path: []
returnValue: true
srcElement: null
target: null
timeStamp: 89358.5
type: "error"
[[Prototype]]: CustomEvent
getAIResponse @ plugin:copilot:37818

Second try - GPT 3.5
Console output

Error in streamManager.streamSSE:
CustomEvent {isTrusted: false, data: '{\n "error": {\n "message": "You exceeded … "param": null,\n "code": null\n }\n}\n', source: SSE2, detail: null, type: 'error', …}
data: "{\n "error": {\n "message": "You exceeded your current quota, please check your plan and billing details.",\n "type": "insufficient_quota",\n "param": null,\n "code": null\n }\n}\n"
isTrusted: false
source: SSE2 {INITIALIZING: -1, CONNECTING: 0, OPEN: 1, CLOSED: 2, url: 'https://api.openai.com/v1/chat/completions', …}
bubbles: false
cancelBubble: false
cancelable: false
composed: false
currentTarget: null
defaultPrevented: false
detail: null
eventPhase: 0
path: []
returnValue: true
srcElement: null
target: null
timeStamp: 29311
type: "error"
[[Prototype]]: CustomEvent
detail: (...)
initCustomEvent: Ζ’ initCustomEvent()
constructor: Ζ’ CustomEvent()
Symbol(Symbol.toStringTag): "CustomEvent"
bubbles: (...)
cancelBubble: (...)
cancelable: (...)
composed: (...)
currentTarget: (...)
defaultPrevented: (...)
eventPhase: (...)
path: (...)
returnValue: (...)
srcElement: (...)
target: (...)
timeStamp: (...)
type: (...)
get detail: Ζ’ detail()
[[Prototype]]: Event

Add support to use settings font-size

I have a custom font size set, but it is not applied to the AI chat window font.


It would be great if there were an option to apply this font size setting to the chat window, for accessibility.

iPad Pro support

It is currently impossible to install the plugin on an iPad Pro.
Please, please, please...
Is there any chance of it being implemented in the future?
Thanks for the amazing work!

Option to open Chat window in a new pane instead of the right sidebar?

My right sidebar is a little narrow, and so when I open the Copilot chat feature, it's too small to be comfortable. Then I have to reach for the mouse to resize it, and when done, drag it back to its original size. Little thing, but...

It would be really nice if an option existed to open the Chat window in a larger full size pane e.g. 50-50 split screen view with the note on the left and the Chat on the right...

Migrate to LangChainJS

This is a significant change to this plugin and will make a lot more features possible, such as better conversation memory, unlimited context, augmented retrieval, and much more. LangChainJS has many existing features for the functionalities I already implemented in this plugin as well.

Will bump major version after this migration.

Note as context

I may have misunderstood something, but how is "Note as context" intended to work? With a note open, selecting the 'Note as context' button only produces the error notification 'Please check your API key and credentials'.

Feature: Estimate cost

It would be great if there were an estimated/running cost displayed in the chat, or an estimated cost of indexing the note when in QA mode.

Mobile support

Great plugin! Just noticed that this plugin is not supported in mobile app. Is there a plan to support it? Thanks!

Feature Request: Enable context on specific notes

Would it be possible to specify the note that the AI is fed as context, rather than only being able to give it the active note? With the way my obsidian is laid out, having Copilot on a separate tab is more ideal than having it in its own panel. Thanks!

Persist vectorStore to a file for use across sessions

Description

Right now, it appears that the vector store is generated 1) whenever the user selects the 'Use Active Note as Context' button for a note that has either not been used as context this session or has been updated, and 2) whenever the session is reset or reloaded. (Link to code)

Persisting the vector store map to the vault would enable embeddings to be reused to reduce the number of hits to the embeddings API.

Suggested Fix

Here's my suggestion on how this could be addressed; feel free to completely ignore everything after this point. (I am, after all, just an internet stranger)

Since the vector store currently is a Map<string, VectorStore> object, it should be possible to save it to a JSON file from the onunload() method in main.ts, and then load it in onload() in the same file.

However, persisting the embeddings would also cause the map to bloat to an absurd degree, so a garbage collection function should also be added that removes vector stores that are out of date. A general outline for this function could be:

  • Compile a list of hashes for all notes in the vault (List 1)
    • Since this would be on-device, there would be no info leakage
  • Get a list of the hashes used as keys in the map (List 2)
  • Create a third list of hashes that are in List 2 but not in List 1 (i.e. it is a hash of a note that no longer exists)
  • Remove all vector stores with their keys in List 3

I'm not sure where the best place to run this would be; my gut call is that it could be triggered as part of the Active Context button, but that might cause lag or latency for larger vaults.

A way to limit this latency might be to update the key format to include the note title (such as {title}-{hash}) so that List 1 could be filtered down to only the relevant notes prior to calculating out all of the hashes. And then if the title changes, or a note is deleted, that vector store can be marked for deletion right away. This would limit the scope of garbage collection to only the notes that have been used as Active Context in the past.
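The persistence and garbage-collection outline above could look roughly like this (the `StoredVectors` payload and the function names are stand-ins, not the plugin's actual types):

```typescript
// Sketch of persisting the vector-store map across sessions and pruning
// stale entries. `StoredVectors` is a placeholder for the real payload.
type StoredVectors = { vectors: number[][]; texts: string[] };

// Serialize the Map to JSON, e.g. from onunload() in main.ts...
function serializeStores(stores: Map<string, StoredVectors>): string {
  return JSON.stringify([...stores.entries()]);
}

// ...and restore it in onload().
function deserializeStores(json: string): Map<string, StoredVectors> {
  return new Map(JSON.parse(json));
}

// Garbage collection as outlined: drop every entry whose hash no longer
// corresponds to a note in the vault (List 2 minus List 1).
function pruneStores(
  stores: Map<string, StoredVectors>,
  liveNoteHashes: Set<string>,
): void {
  for (const hash of [...stores.keys()]) {
    if (!liveNoteHashes.has(hash)) stores.delete(hash);
  }
}
```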


By the way, this project is absolutely awesome. Not only does it work great out of the box, but the code is also much more readable than some similar plugins. So kudos, and thanks for building this!

Model request failed

I'm getting an error message that says "Model request failed: TypeError: Cannot read properties of undefined (reading 'llm')" when I try to select some text. Here's a screenshot of the error:

Captura de Tela 2023-06-20 aΜ€s 19 41 27

I've already updated to the latest version and tested the API key in another plugin, but the issue persists. I'm not sure what else to do.

I would be grateful if someone could help me solve this problem.

Add azure models

New models to add

  • gpt-3.5-turbo-16k
  • gpt-4-32k
  • claude-100k
  • azure gpt-3.5-turbo

Increasing token limit breaks co-pilot

When increasing the token limit of Copilot above 1000, I do not get any response from ChatGPT, not even after a restart. Setting the token limit below 1000 fixes the issue.

Specify used template when saving a conversation

I want to be able to specify in the settings a template that will be used when I save a chat as a note.
My use case is that I want to add a tag in the front matter so I can retrieve all my chats easily with Dataview.

There may be other use cases.

Feature: store API key separate from `data.json`

For those of us who use git to backup and sync our Vaults, it would be nice if we could store the API Key outside of the data.json so that it can be .gitignore'd separately from the rest of our settings.
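One way this could work: keep non-secret settings in data.json and overlay an optional, gitignored secrets file at load time. A sketch of the merge step (the file layout and the `CopilotSettings` shape are hypothetical, not the plugin's actual schema):

```typescript
// Hypothetical overlay of a gitignored secrets file onto plugin settings.
interface CopilotSettings {
  model: string;
  apiKey?: string;
}

function mergeSettings(
  base: CopilotSettings,
  secrets: Partial<CopilotSettings> | null,
): CopilotSettings {
  // Secrets (e.g. from a gitignored secrets.json) win over data.json;
  // a missing secrets file leaves the base settings untouched.
  return { ...base, ...(secrets ?? {}) };
}
```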
