Akeru is an open-source AI platform built on top of the Akeru AI edge network. The network runs as a Bittensor subnet, providing transparent, safe, and highly available AI capacity.
Validators should ping each miner every minute (or at another fixed interval) to measure the network distance to them. This will help them decide where to route a request so that the subnet edge network serves requests efficiently.
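As a sketch of the routing idea (the miner ping mechanism and all names here are assumptions, not actual subnet code), a validator could time a round trip to each miner and route to the lowest-latency one:

```typescript
// Hypothetical sketch: time a round trip to each miner and pick the closest.
// In the real subnet, ping() would hit a known miner health endpoint.
async function measureLatency(ping: () => Promise<void>): Promise<number> {
  const start = performance.now();
  await ping();
  return performance.now() - start;
}

async function pickClosestMiner<T>(
  miners: { id: T; ping: () => Promise<void> }[],
): Promise<T> {
  // Ping all miners concurrently, then route to the fastest responder.
  const timed = await Promise.all(
    miners.map(async (m) => ({ id: m.id, ms: await measureLatency(m.ping) })),
  );
  timed.sort((a, b) => a.ms - b.ms);
  return timed[0].id;
}
```

Running this on a fixed interval would keep a fresh latency table per validator without blocking request handling.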
We need a way to allow users to upload documents. This issue is to research what is possible:
Documents should be stored in a format that is retrievable by agents.
Documents should have permissions associated with them. Think of a flexible permission system where users can move documents between threads or lock them to a specific thread.
Do we store documents + embeddings or just embeddings?
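As a starting point for the research, one hypothetical shape for stored documents (every field name here is an assumption for discussion, not a decided schema). It keeps both the raw content and the embedding so the "documents + embeddings vs. embeddings only" question stays open:

```typescript
// Hypothetical document record for the research discussion.
type Permission = "thread-locked" | "movable";

interface StoredDocument {
  id: string;
  content: string | null;  // null if we decide to store embeddings only
  embedding: number[];     // vector used by agents for retrieval
  threadId: string | null; // set when the document is tied to one thread
  permission: Permission;
}

// A thread-locked document cannot be attached to a different thread.
function canAttach(doc: StoredDocument, threadId: string): boolean {
  return doc.permission === "movable" || doc.threadId === threadId;
}
```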
The subnet, now deployed, has the capacity to run queries against Llama 2 and Mistral 7B. Connect them as data sources in the LLM choices of the Akeru API.
The verification object in the layout's metadata lets search engines confirm site ownership so the site is indexed properly when people look for our website. We would need to obtain verification keys from Google, Yandex, and Yahoo to set this up in the codebase.
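With Next.js's Metadata API, the verification keys would slot into the root layout roughly like this (the values are placeholders until the real keys are obtained; typed loosely here to stay self-contained, whereas the app would use Next.js's `Metadata` type):

```typescript
// app/layout.tsx (sketch): site-verification keys in the layout's metadata.
// All three values are placeholders to be replaced with real console keys.
export const metadata = {
  verification: {
    google: "GOOGLE_SITE_VERIFICATION_KEY",
    yandex: "YANDEX_VERIFICATION_KEY",
    yahoo: "YAHOO_VERIFICATION_KEY",
  },
};
```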
The README is the first point of interaction for those interested in contributing to Akeru.ai. As the project evolves, ensuring that potential contributors can easily find and access the Contributing Guidelines is essential for fostering community involvement and clarifying contribution processes.
Objectives
Enhance Navigation: Insert a direct link to the Contributing Guidelines to simplify access for new contributors.
Clarify Contribution Process: Make it clear how new members can start contributing, emphasizing transparency and ease of access.
Encourage Community Engagement: By providing clear directions and easy access to the guidelines, we aim to motivate more users to contribute.
Expected Outcomes
A more navigable README that effectively guides potential contributors to the information they need.
Increased contributor engagement due to improved clarity and accessibility of contribution resources.
This update is a crucial step towards making our project more open and accessible to new contributors, thereby enhancing our community-driven development approach.
Since adding the subnet to the project, contributing can mean different things. We will start to have contributors to only the UI, only the API, only the subnet or even contributors who want to run the full project from end to end. We need better documentation and guides on how to get started contributing.
Acceptance Criteria:
Build a contributing section for the UI.
Build a contributing section for the API.
Build a contributing section for validator and miner code.
We need a basic design for the landing page. We could try our hand at doing it ourselves, or we could adapt a landing page from the Figma community pages.
As a leftover from the subnet template, most configuration options are passed as command-line arguments. For easier deployments and to decouple configuration from the validator and miner classes, let's move them into env files instead.
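A minimal sketch of the direction (the variable names `NETUID`, `WALLET_NAME`, and `PORT` are assumptions, not the template's actual options): read configuration from the environment with explicit defaults, so the validator and miner classes just receive a plain config object:

```typescript
// Sketch: load node configuration from env vars instead of argv.
// Variable names are assumptions; defaults keep local dev working.
interface NodeConfig {
  netuid: number;
  walletName: string;
  port: number;
}

export function loadConfig(env: Record<string, string | undefined>): NodeConfig {
  return {
    netuid: Number(env.NETUID ?? "1"),
    walletName: env.WALLET_NAME ?? "default",
    port: Number(env.PORT ?? "8080"),
  };
}

// Usage: const config = loadConfig(process.env);
```

Taking `env` as a parameter (rather than reading `process.env` directly) also makes the loader trivially testable.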
The README file is the first point of interaction for potential users and contributors to Akeru.ai. As the project grows and incorporates more features, it's crucial that the README accurately reflects the capabilities and mission of Akeru.ai, providing clear and engaging content that facilitates understanding and involvement.
Expected Outcomes:
A comprehensive, well-structured README that serves as an effective entry point for engaging new users and contributors.
Enhanced clarity on technical aspects and project roadmap, facilitating easier onboarding and collaboration.
NIBittensorLLM is a decentralized LLM built on top of Bittensor. It would be a great alternative on Sprout to GPT-4 and other closed-source models.
AC:
produce a plan to integrate NIBittensorLLM
investigate the security requirements for holding tokens to call the model. Are tokens needed? How can this be platformized?
For some reason, certain ISPs do not allow the akeru.ai domain, probably because Vercel has been blacklisted. However, this solution, by this person, could prove to be a viable fix for this situation.
The .vscode folder has user-specific settings for VS Code which aren't needed, and by adding .vscode to the .gitignore, we make sure that these settings stay local and don't mess up our repo or cause problems between contributors' local dev environments.
Acceptance Criteria:
Remove the existing .vscode folder.
Make sure the folder is not added again in the future.
The message endpoint currently only accepts incoming messages; it does not return assistant messages.
Call the proper LLM adapter to send the existing messages and receive the answer. (Don't hard-code GPT-4 only: get the thread's model from the DB and switch to the right adapter if necessary.)
Store the answer in the database.
Send the answer to the client.
Explore Elysia streams to stream the answer back. Also request a stream from the GPT-4 API if the response isn't already streamed.
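The adapter-switching step above could look roughly like this (a sketch with mock adapters; the registry, function names, and reply shapes are all assumptions, and the real adapters would call the actual model APIs):

```typescript
// Sketch: route a thread's history to the adapter matching its model.
type Role = "user" | "assistant";
interface Message { role: Role; content: string }
interface LLMAdapter { complete(history: Message[]): Promise<string> }

// Mock registry; real entries would wrap GPT-4, Llama 2, Mistral 7B, etc.
const adapters: Record<string, LLMAdapter> = {
  "gpt-4": { complete: async (h) => `gpt-4 reply to: ${h.at(-1)?.content}` },
  "mistral-7b": { complete: async (h) => `mistral reply to: ${h.at(-1)?.content}` },
};

export async function answerThread(
  model: string, // in the endpoint, read from the thread record in the DB
  history: Message[],
): Promise<Message> {
  const adapter = adapters[model];
  if (!adapter) throw new Error(`No adapter registered for model: ${model}`);
  const content = await adapter.complete(history);
  // The real endpoint would store this answer in the DB before returning it.
  return { role: "assistant", content };
}
```

Keeping the switch behind one function means the streaming work later only has to change the adapter interface, not every call site.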
There are multiple Mistral models available. Their advantage is that they are open source. Let's figure out a cost-efficient one to make available as an LLM on Sprout:
figure out if we can self-host at a reasonable cost during development
We should standardise the role of Assistants to "assistant" instead of "agent", as that is what's commonly used in OpenAI's context. It also makes it easier to develop OpenAI adapters.
Add an instructions field to Assistants to give them more context on what they should do.
Add a model field to Assistants to allow for adaptive LLM model usage.
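Putting the three points together, the proposed Assistant shape might look like this (field names beyond `role`, `instructions`, and `model` are assumptions):

```typescript
// Sketch of the proposed Assistant shape after standardisation.
type AssistantRole = "assistant"; // standardised on OpenAI's term, not "agent"

interface Assistant {
  id: string;
  role: AssistantRole;
  instructions: string; // extra context on what the assistant should do
  model: string;        // e.g. "gpt-4" or "mistral-7b", enables adaptive model usage
}

export function makeAssistant(id: string, instructions: string, model: string): Assistant {
  return { id, role: "assistant", instructions, model };
}
```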
On certain mobile screens, the button's background is the same color as the gradient section of the page background, so the button blends in and becomes unnoticeable. This is an accessibility issue:
We currently do not have a pull request template for our repository. Adding a template would help streamline the pull request process by ensuring key information is included.
As leftovers from the subnet template, we have inherited a lot of technical debt in a coding style far from our own. Let's clean up the OOP patterns. We can keep some of them, but inheritance is overused for our taste and obfuscates the code's real functionality. Let's move closer to a functional style.
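To illustrate the intended direction (this is not actual subnet code, just the shape of the refactor): behaviour hidden behind a class hierarchy becomes a plain function with explicit inputs:

```typescript
// Before (template style): a trivial behaviour buried under inheritance.
abstract class BaseNeuron {
  abstract score(response: string): number;
}
class Validator extends BaseNeuron {
  score(response: string): number {
    return response.length > 0 ? 1 : 0;
  }
}

// After (preferred style): the same behaviour as a plain exported function.
export function scoreResponse(response: string): number {
  return response.length > 0 ? 1 : 0;
}

// Both produce the same result; the functional form is easier to read and test.
export const sameResult = new Validator().score("ok") === scoreResponse("ok");
```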
We will soon need to create dashboard UIs, a landing page, etc. Currently Sprout is a backend-only application. This issue is to move to a monorepo setup.
Acceptance Criteria:
explore Bun support for monorepos
move the current code to a package under the monorepo
update the README properly
create scripts at the top level to start the project quickly.
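For reference, Bun supports npm-style workspaces, so the top-level `package.json` could look roughly like this (the package layout and script names are placeholders, not a decided structure):

```json
{
  "name": "akeru",
  "private": true,
  "workspaces": ["packages/*"],
  "scripts": {
    "dev": "bun run --cwd packages/sprout dev"
  }
}
```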
Currently, calling the chat endpoint on validators results in a 5-10 second delay since the LLM workload is slow. We should implement equivalent streaming methods in both the miner and the validator so they behave more like the OpenAI API.
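A sketch of the shape this could take (not the actual miner/validator code): an async generator lets each hop forward tokens as they arrive instead of buffering the whole answer, similar to the OpenAI streaming API:

```typescript
// Sketch: relay completion chunks as they arrive instead of buffering.
// A real miner would yield chunks coming from the underlying LLM.
async function* streamCompletion(
  source: AsyncIterable<string>,
): AsyncGenerator<string> {
  for await (const chunk of source) {
    yield chunk; // the validator relays each chunk straight to the client
  }
}

// Helper for illustration: turn a fixed token list into an async source.
async function* fromTokens(tokens: string[]): AsyncGenerator<string> {
  for (const t of tokens) yield t;
}
```

Because the miner and validator share this interface, the first token reaches the client as soon as the model emits it, cutting perceived latency even though total generation time is unchanged.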
Right now we allow communication using a bearer token. This works for now, but eventually we need to move away from hard-coded secrets toward something more formal, like an allow-list mechanism.
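A minimal sketch of the allow-list idea (key format, storage, and rotation are out of scope; the env-var name and helpers are assumptions):

```typescript
// Sketch: check callers against an allow list instead of one shared secret.
// e.g. ALLOWED_KEYS="key-a,key-b" in the environment (name is an assumption).
export function parseAllowList(raw: string | undefined): Set<string> {
  return new Set(
    (raw ?? "")
      .split(",")
      .map((k) => k.trim())
      .filter(Boolean),
  );
}

export function isAllowed(allowList: Set<string>, callerKey: string): boolean {
  return allowList.has(callerKey);
}

// Usage: reject the request with 401/403 when isAllowed(...) is false.
```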
We need to setup the frontend scaffold for the entire application.
We have already decided to set up a few things in the codebase, like the app router and Tailwind, but we will be doing some work with streaming and a few other things that I think @GuiBibeau will be well positioned to handle.
As of now, we only have this design to base our landing page on and to draw inspiration from for our waitlist page: