The plan for user autonomy / Server infrastructure
Hey folks,
thanks for your input on this.
What is the goal?
Our promise is to keep users in full control of their data and to allow effective, asynchronous sharing of content recommendations, content associations and metadata.
Therefore there is a need to host the data somewhere in the cloud, but under the control of the users.
This means that as soon as a server is needed, the idea is to make it as easy as possible for users to set up their own server with our firmware (e.g. a Docker container) that handles all the data storage and processing.
Secondary effect:
Making this form of decentralisation the default architecture could contribute to a more decentralised internet infrastructure, since we also reach many non-technical internet users.
Because a user's data is always available via their own server, this could also lay the foundation for other decentralised projects to reach broader use.
(i.e. P2P social networks or decentralised web search engines like Yacy/Sersia)
We hope it would lead to a shift towards ecosystems that form around users, not around platforms.
The current centralised, platform-focussed situation leads to an unhealthy concentration of power on the web.
The architecture chart below shows how I imagine the system.
Stage 1:
In the first stage, it's just client-side software: the browser extension.
There is no communication with the outside world needed yet.
Currently the DB used is PouchDB.
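To make stage 1 concrete: everything lives in the local store, so the document shape and a deterministic `_id` scheme matter for later syncing. A minimal sketch of what a visit record might look like — the field names and the `type/key/timestamp` ID convention are my assumptions, not the actual schema:

```javascript
// Hypothetical document shape for the stage-1 local store.
// PouchDB/CouchDB identify documents by `_id`, so a deterministic
// "type/key/timestamp" scheme keeps visits sortable and
// range-queryable (e.g. via allDocs({startkey, endkey})).
function visitDoc(url, timestamp) {
  return {
    _id: `visit/${encodeURIComponent(url)}/${timestamp}`,
    type: 'visit',
    url,
    timestamp,
  };
}

const doc = visitDoc('https://example.com/page', 1700000000000);
console.log(doc._id); // visit/https%3A%2F%2Fexample.com%2Fpage/1700000000000
```

Deterministic IDs also mean that the same visit written on two devices collapses into one document during replication instead of duplicating.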
Stage 2:
Providing a server that handles all the logic of syncing with the attached services and processing the data (like building the search indexes or analysing for related content).
It also includes the first version of the communication API (called the "Ragnorok-Module" as an homage to Daniel Suarez's Daemon & Freedom books ;) )
In this stage, this API is there to communicate with the different clients a user uses, as well as to provide a web-based interface the user can access from anywhere on the web.
Here we possibly have to sync an index to the local machine in order to provide offline support.
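The client/server syncing could follow CouchDB's checkpoint model: the client remembers the last sequence number it has seen and only pulls changes after it. A toy illustration of that idea (field names invented, not the real replication protocol):

```javascript
// Sketch of CouchDB-style incremental sync: the client keeps a
// checkpoint (last sequence number seen) and pulls only newer changes.
function changesSince(changeFeed, checkpoint) {
  return changeFeed.filter((change) => change.seq > checkpoint);
}

const serverFeed = [
  { seq: 1, id: 'visit/a', doc: { url: 'https://a.example' } },
  { seq: 2, id: 'visit/b', doc: { url: 'https://b.example' } },
  { seq: 3, id: 'visit/a', doc: { url: 'https://a.example', tag: 'read' } },
];

// Client last synced up to seq 1, so it pulls seqs 2 and 3.
const pending = changesSince(serverFeed, 1);
console.log(pending.map((c) => c.seq)); // [ 2, 3 ]
const newCheckpoint = Math.max(...pending.map((c) => c.seq)); // 3
```

In practice PouchDB's replication handles this checkpointing for us; the sketch is only to show why an offline index sync (the point above) is cheap — it transfers deltas, not the whole store.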
Stage 3:
As soon as the system works for the users themselves, we update the API to be able to talk to other APIs in the network and exchange information, like content recommendations, or provide searchable indexes of the pages other users visited.
In this stage people can start following each other and thereby build circles of trust.
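One possible reading of "circles of trust" — purely illustrative, the actual trust rule is an open design question — is a follow graph where index queries are only answered for mutual follows:

```javascript
// Illustrative sketch: each user keeps a set of peers they follow,
// and only serves index queries to mutual follows.
const follows = new Map(); // userId -> Set of followed userIds

function follow(who, whom) {
  if (!follows.has(who)) follows.set(who, new Set());
  follows.get(who).add(whom);
}

// Assumed trust rule: a peer may search my index only if we follow
// each other (mutual follow = circle of trust).
function maySearchIndex(owner, requester) {
  return (
    follows.get(owner)?.has(requester) === true &&
    follows.get(requester)?.has(owner) === true
  );
}

follow('alice', 'bob');
follow('bob', 'alice');
follow('alice', 'carol'); // one-way, so no index access

console.log(maySearchIndex('alice', 'bob'));   // true
console.log(maySearchIndex('alice', 'carol')); // false
```

Whether access should be mutual-only, one-way, or scoped per data type (recommendations vs. full index) is exactly the kind of thing this stage would have to settle.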
I have a couple of questions:
- How seamless can the process of setting up the servers be made for the user? (Important for non-technical users, as most of ours will be.)
- What kind of problems do you see with this architecture?
- How can we make the code that runs on the server replicable and agnostic to the choice of server?
- What storage solution do you know of that is capable of running/syncing in an extension as well as on the server? Ideally also with a built-in permission system to handle access — AFAIK Pouch/CouchDB don't have that; remotestorage.js does, for example.
- If we use a system like IPFS/IPDB, can we also host and run code there?
- As far as I know, searching in encrypted datasets is not yet mature, so the question is: is it possible to add an encrypted access layer that would effectively sandbox the data and its processing, making it unavailable to outsiders without the right credentials?