jeffser / alpaca
An Ollama client made with GTK4 and Adwaita
Home Page: https://jeffser.com/alpaca
License: GNU General Public License v3.0
Sometimes I want to keep the model loaded indefinitely, and sometimes I want to unload it immediately (depending on the server I am using). It would be nice if there were a setting to control the keep_alive option when calling the API.
An integer/textbox input on the settings page that would be used to set the keep_alive option when calling the API.
Currently there is no alternative except manually unloading those models from the CLI.
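A setting like this could be passed straight through to Ollama's generate endpoint, which already accepts a keep_alive field. A minimal sketch, assuming the standard Ollama REST API; the function names and defaults here are illustrative, not Alpaca's actual code:

```python
import json
import urllib.request

def build_generate_payload(prompt, model="llama3", keep_alive="5m"):
    """Build the JSON body for Ollama's /api/generate endpoint.

    keep_alive accepts a duration string such as "5m", 0 to unload the
    model right after the response, or -1 to keep it loaded indefinitely.
    """
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "keep_alive": keep_alive,  # value taken from the proposed setting
    }

def generate(prompt, keep_alive, host="http://localhost:11434"):
    """POST the payload to a running Ollama server and return its reply."""
    body = json.dumps(build_generate_payload(prompt, keep_alive=keep_alive)).encode()
    req = urllib.request.Request(
        host + "/api/generate", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With a textbox input, the user's value (e.g. "10m", 0, or -1) would simply replace the keep_alive default.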
The most recent part of the conversation is hidden below the bottom of the screen, unless you scroll down to see it.
Hello, first I'd like to thank you for this program; it's simple enough and looks good. I've encountered the following bug when opening it:
flatpak run com.jeffser.Alpaca
Traceback (most recent call last):
File "/app/share/Alpaca/alpaca/main.py", line 43, in do_activate
win = AlpacaWindow(application=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/share/Alpaca/alpaca/window.py", line 1001, in __init__
css_provider.load_from_path(os.path.join(self.app_dir, "share/Alpaca/alpaca/style.css"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen posixpath>", line 76, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType
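The traceback shows os.path.join receiving None, meaning self.app_dir was never set — most likely an environment lookup that returns None outside the expected environment. A defensive sketch of a fix; the variable name FLATPAK_DEST and the "/app" fallback are assumptions for illustration, not necessarily what Alpaca reads:

```python
import os

def resolve_app_dir():
    """Resolve the app prefix, falling back when the env var is unset.

    The env var name and default path here are assumptions; the real
    lookup in Alpaca's window.py may differ.
    """
    app_dir = os.getenv("FLATPAK_DEST")  # may be None outside the sandbox
    if app_dir is None:
        app_dir = "/app"  # standard Flatpak install prefix
    return app_dir

# os.path.join never sees None, so the TypeError above cannot occur
style_path = os.path.join(resolve_app_dir(), "share/Alpaca/alpaca/style.css")
```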
Hello, it's me (again)
When creating a chat, or when we launch the app, it would be nice to have an interface like this instead of a blank frame
I was asking for a better text entry; like this it's not far from the GNOME interface guidelines (the screenshot is from the Flare app).
A big thanks!
This is something that sounds fun to do. I could extract the transcript/subtitles, description, and even the thumbnail from a video and feed all of that to a model. I might do it, maybe after 1.0.0.
It would be useful for the user to be able to delete messages per-message
When making a new chat, instead of asking for the user to name it, give them a new, unnamed chat screen, which doesn't yet correspond to a chat button on the left. Allow them to send their first message to a model. Once the LLM finishes responding, have the app ask the model to name the chat in the background, and make the chat button appear on the left with that name.
See ChatGPT for reference.
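The background-naming step described above could be kept off the UI thread with a small worker. A sketch under stated assumptions — the prompt wording and callback shape are invented here, and the real app would marshal on_named back to the GTK main loop:

```python
import threading

def request_chat_title(first_exchange, ask_model, on_named):
    """Ask the model for a short chat title on a worker thread.

    ask_model: callable that sends a prompt to the LLM and returns text
    on_named:  callback invoked with the cleaned-up title
    Returns the worker thread so callers can join it if needed.
    """
    def worker():
        prompt = (
            "Give this conversation a title of at most five words, "
            "with no quotes or punctuation:\n" + first_exchange
        )
        title = ask_model(prompt).strip().strip('"')
        on_named(title or "New Chat")

    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread
```

Once on_named fires, the chat button can appear in the sidebar with the generated title, exactly as described.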
The app is in its infancy but it already looks and feels great! You might want to take a look at applying for GNOME Circle, as their review is usually really good at picking up design, functionality and accessibility issues that we might miss by using the app as usual.
I believe they usually avoid having ties to AI projects, but since this can and is mainly intended to be used locally and with open models, it has the potential to be a great addition to the project's community curated apps, which could give it even greater visibility!
This is an amazing tool to work with, with a very simple and pretty UI, but I miss some keyboard shortcuts like Ctrl+C to copy things and Ctrl+Enter to send a message (or maybe just Enter, or some other modifier + Enter).
I am using openSUSE Tumbleweed and the app is installed as a Flatpak from Flathub.
The current text input box can be seen below.
Consider increasing this to match the padding in the "sent" box, seen below.
For reference, I've looked at the libadwaita chat apps Flare and Fractal, as well as the new layout on chatgpt.com.
Love the app!
This is definitely not a priority nor a criticism of the current icon. It looks good, but I would suggest looking into GNOME's app-icon-request repo, as their designers could produce a really professional-looking icon (and I believe one of their designers did a sketch that could be used as a base a while back, although I couldn't find it). I thought about opening a request there, but it would be rude to do so without the developers' approval.
I was going to design one to suggest, but my icon design skills are really lacking after changing careers a while back 😅
I have a few ideas for moving some buttons around to simplify/improve the look and feel of the app. If these should be separate feature requests, let me know. I didn't want to clutter the page with too many issues. Below are my ideas for how to reorganize the UI.
I can't figure out how Snapcraft works; if someone knows, please help me.
Excuse my English, I'm French...
Hello, thank you for this amazing app!
Here is my feature request:
- Pre-fill the server URL field with http://localhost:11434 when activating a remote instance
- Suggest installing Ollama with: curl -fsSL https://ollama.com/install.sh | sh
- Then ask the user to install a model from Ollama
A big thanks again for your work! Love it!
I suggest logging app events to standard output. Would a PR for this be welcome?
The current layout of the left sidebar is shown below.
Consider placing the inactive chat text directly on the left sidebar background, as seen in Fractal, below. It could also be good to decrease the vertical spacing between chat names. Spacing is good for visibility, but right now Alpaca has more than GNOME Settings or Fractal.
In addition, it could be nice to hide the "delete" and "rename" icons behind a "three-dots" button, or put them in a right-click drop-down menu.
When creating a new chat, clicking create without entering a name will have the same effect as clicking cancel.
I suggest either using a default name such as "Untitled Chat" or keeping the modal open with a message.
Opening Alpaca for the first time doesn't let you close the app if you haven't configured Ollama yet.
Even if you try to close it using the "x" on the window, it will spam the "Failed to connect to server" banner without exiting the app.
Of course it's not impossible to quit: Ctrl+Q or force-quitting the app works, and it's not like it crashes or gets stuck. It's just that the "x" button on the window doesn't close it, as would be expected.
At least Markdown, but maybe also PDF.
I don't know if this is a bug or not, just checking. The original llava image model allows you to send images. llava with llama3, for example, does not allow sending images, while I do think it's technically designed for this.
Edit: this is the case with moondream too. It has image understanding, but only the original llava allows sending images in the GUI.
It would be useful if generation of new responses could be stopped before completion.
Currently downloading a model will prevent all other interaction with the app. This includes closing the app.
I suggest a "Close and continue downloading" button as well as a "stop download" button, perhaps renamed to match the GNOME guidelines.
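Decoupling the download from the UI usually means running it on a worker thread with a cancellation flag for the "stop download" button. A minimal sketch, assuming a chunked byte source stands in for the real model download stream:

```python
import threading

class ModelDownload:
    """Run a chunked download on a worker thread so the UI stays responsive."""

    def __init__(self, chunks, on_progress):
        self._chunks = chunks           # iterable of byte chunks
        self._on_progress = on_progress # e.g. updates a progress bar
        self._stop = threading.Event()  # set by the "stop download" button
        self.received = b""

    def start(self):
        thread = threading.Thread(target=self._run, daemon=True)
        thread.start()
        return thread

    def stop(self):
        """Request cancellation; the worker exits at the next chunk."""
        self._stop.set()

    def _run(self):
        for chunk in self._chunks:
            if self._stop.is_set():
                break
            self.received += chunk
            self._on_progress(len(self.received))
```

A "Close and continue downloading" button would then just hide the window while the worker keeps running, instead of calling stop().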
Hi,
can you provide a Windows version?
Sometimes, on a fresh start of the app, my previously downloaded models and chats aren't present. When I restart the app, they come back. Sometimes it takes more than one restart to fix.
Hello, great app!
I have entered http://localhost:11434 in the remote instance URL to use a local Ollama instance. I don't know if it's the correct way, but it doesn't connect. If I enter that URL in a browser I get "Ollama is running", so it's working.
Now it's not super clear how to use a local instance vs. the Ollama running in the Flatpak sandbox.
I'm running a Fedora 40 install and updated from Alpaca 0.6.0 to 0.7.0.
When pressing the icon to load models, nothing happens.
It worked just fine with 0.6.0.
I don't know how to view the app logs, though.
If you need specifics, I can get them for you, but I can't find documentation on how to do it.
Thanks
It might be possible to install Ollama locally within the Flatpak sandbox using the extra-data source type.
I suggest that alongside the Ollama connect option, the user should have an option to "Setup sandboxed Ollama". Ollama could be downloaded in a similar manner to models, and be automatically started when the app launches if installed.
See:
https://docs.flatpak.org/en/latest/manifests.html#modules
https://github.com/ollama/ollama/blob/main/docs/linux.md
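A sketch of what such a manifest module might look like, based on the Flatpak extra-data documentation linked above. The checksum and size are placeholders that would come from the actual Ollama release, and the apply_extra_data script (which Flatpak runs at install time to unpack the archive) is omitted:

```json
{
  "name": "ollama",
  "buildsystem": "simple",
  "build-commands": [
    "install -Dm755 apply_extra_data.sh /app/bin/apply_extra_data"
  ],
  "sources": [
    {
      "type": "extra-data",
      "filename": "ollama-linux-amd64.tgz",
      "url": "https://ollama.com/download/ollama-linux-amd64.tgz",
      "sha256": "<checksum of the release tarball>",
      "size": 0
    }
  ]
}
```

extra-data downloads the file on the user's machine at install time rather than bundling it, which keeps the Flathub build reproducible while still letting the app ship a sandboxed Ollama.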
The current behavior is that Enter inserts a newline, Ctrl-Enter sends the message.
I suggest that the behavior be changed so that Enter sends the message, and Shift-Enter inserts a newline. This would bring the behavior in line with mainstream instant messaging apps.
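The decision logic for this can be isolated from the key-press handler itself. A sketch assuming a Gtk.EventControllerKey "key-pressed" callback would delegate to it; the constants are the standard Gdk values, hard-coded here so the example runs without GTK installed:

```python
# Gdk keyval / modifier values, hard-coded for a GTK-free sketch
KEY_RETURN = 0xFF0D   # Gdk.KEY_Return
SHIFT_MASK = 1 << 0   # Gdk.ModifierType.SHIFT_MASK

def on_key_pressed(keyval, state):
    """Return "send" for Enter, "newline" for Shift+Enter, else None.

    None means the key press is not handled and falls through to the
    text view's default behavior.
    """
    if keyval != KEY_RETURN:
        return None
    if state & SHIFT_MASK:
        return "newline"  # Shift+Enter inserts a line break
    return "send"         # plain Enter sends the message
```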
KDE has an application called Alpaka: https://invent.kde.org/utilities/alpaka
Just like this application, it is also an Ollama client, except it's using Qt6/Kirigami instead of GTK4/Libadwaita. The names have a single letter of difference and are pronounced the same, which is bound to cause confusion.
I see that importing and exporting chat history is on the roadmap; however, I'd like to suggest the following format:
(new lines encoded as \n)
[<iso8601 timestamp>] <<User or Pal>> <message>
Example:
[2024-05-19T18:43:45.598Z] <User> Hello
[2024-05-19T18:43:50.598Z] <Pal> Hi
This would be both human readable and easy to parse
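The proposed format can be round-tripped with a few lines of standard-library Python; this is only a sketch of the suggestion above, not an implementation in Alpaca:

```python
from datetime import datetime, timezone

def format_line(ts, sender, message):
    """Render one chat message in the proposed log format.

    Newlines inside the message are encoded as literal \\n so each
    message occupies exactly one line.
    """
    stamp = ts.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return f"[{stamp}] <{sender}> " + message.replace("\n", "\\n")

def parse_line(line):
    """Split a formatted line back into (timestamp, sender, message)."""
    stamp, rest = line[1:].split("] <", 1)
    sender, message = rest.split("> ", 1)
    return stamp, sender, message.replace("\\n", "\n")
```

The same pair of functions works for both the "User" and "Pal" senders in the example.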
I suggest the following:
If the user is currently scrolled to the bottom, the scroll should remain at the bottom despite new responses.
Not necessarily a priority thing, but it would be nice to have some improvements to the Flathub page, mainly:
This is how it currently looks in GNOME Software (the text is in Portuguese though, sorry):
It would be very useful if all prior sent messages could be edited.
Currently, only the latest tag is downloadable. The ability to download other tags for each model is important.
When the remote connection setting is enabled and the remote URL is not valid, Alpaca becomes unresponsive
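One way to avoid the hang is to probe the URL with a short timeout before committing to it, treating any failure as "unreachable" instead of blocking the UI. A standard-library sketch; the function name and timeout are illustrative:

```python
import urllib.error
import urllib.request

def server_reachable(url, timeout=2.0):
    """Return True if a server answers at url within timeout seconds.

    A running Ollama instance responds to GET / with "Ollama is running";
    connection errors, malformed URLs, and timeouts all return False
    rather than raising, so the caller can show a banner and move on.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, ValueError, OSError):
        return False
```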
Currently, sent input text is placed in a card, which is placed in another card which contains the chat, which is placed on the background.
Consider removing this "whole chat" card, and instead place the output text directly on the background.
This design is shared by the occasional descriptive text in GNOME Settings (below) and by the layout of chatgpt.com.