
Private sites · zeronet · OPEN · 36 comments

HelloZeroNet commented on April 28, 2024
Private sites

from zeronet.

Comments (36)

OliverCole commented on April 28, 2024

@anoadragon453 Absolutely, I started outlining some ideas at https://github.com/OliverCole/ZeroNet/wiki/Private-sites-on-ZeroNet, but haven't had time to work on it further. Let me know if you want to talk over what I wrote - it wasn't finished.

The problem is it gets complex fast - when you remove a user, do you want to re-encrypt the site content to immediately deny them access to the data? Or just encrypt changes from that point? How do you make all that work with the delta updates?

But I think there is huge value in supporting a simple use case - someone publishes some leaked documents with a single symmetric key, which is revealed to media/the public at a later date. I would love to see Zeronet do that!

anoadragon453 commented on April 28, 2024

when you remove a user, do you want to re-encrypt the site content to immediately deny them access to the data? Or just encrypt changes from that point? How do you make all that work with the delta updates?

If someone's already got the site contents, they can copy that off somewhere. If they initially had access, you should consider anything they had access to compromised.

Don't worry about encrypting anything here, that just adds complexity. Just stop sending them deltas.

alxbob commented on April 28, 2024

Maybe use something like this: https://github.com/pyca/pynacl, as used in OpenBazaar.

anoadragon453 commented on April 28, 2024

I think this could open a lot of really novel use cases. I also think the authenticated method is the best solution: it requires much less invasive changes to ZeroNet, and the "someone publishes some leaked documents with a single symmetric key, which is revealed to media/the public at a later date" bit can already be done with ZeroNet today. Just encrypt the file locally and share the file on a zite.
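
That "encrypt locally, reveal the key later" flow can be sketched with the stdlib alone. The SHA-256 counter-mode keystream below is a toy stand-in for a real cipher such as AES-GCM (the stdlib has no AES); it only illustrates the publish-ciphertext-now, reveal-key-later mechanics:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream -- a stand-in for a real
    # cipher like AES-GCM; illustration only, not for actual security.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# Author encrypts the documents and publishes only the ciphertext on the zite.
key, nonce = b"k" * 32, b"n" * 16
plaintext = b"leaked documents"
ciphertext = xor(plaintext, keystream(key, nonce, len(plaintext)))

# Later the key is revealed, and anyone already hosting the ciphertext can decrypt.
assert xor(ciphertext, keystream(key, nonce, len(ciphertext))) == plaintext
```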

The password-based authentication method can be added too, though imo starting with pubkey-based auth sounds best, as I imagine that'll be the more widely used one.

The per-file encryption feature is nice in theory, but quite computationally expensive and not scalable. Doing it on an authenticated-request basis is much cheaper, and per-directory authentication is trivial (web servers already do this today).

There is currently a 0.169 BTC bounty (or $1261.37) open for this. Adding private zites would start to allow for paid content and give ZeroNet sites an avenue for legitimately distributing copyrighted works over an entirely decentralized system.

HelloZeroNet commented on April 28, 2024

the problem is the zip file is not suitable for multi-user sites, and per-user encryption does not really work for many users (100+)

patch command: to greatly reduce bandwidth usage when one of the files is modified: instead of re-transferring the whole file, only send the changed lines (a diff)

anoadragon453 commented on April 28, 2024

Couldn't this just be:

  • The site admin has an access.json file in the site root that denotes which ZeroNet public addresses can access the site
  • When peers ask for the files of this site, they need to sign their request with a private key that matches a whitelisted address
  • If they do, the peer sends the file over
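
A runnable sketch of that whitelist check. Signature verification is stubbed out with HMAC purely to keep the example self-contained (ZeroNet would really verify a Bitcoin-style ECDSA signature against the address); the access.json shape and every name here are invented for illustration:

```python
import hashlib
import hmac
import json

# Hypothetical access.json: whitelisted ZeroNet addresses.
ACCESS_JSON = json.loads('{"whitelist": ["1evJheeFpVQkHaZdAzEMj75w5T14ogZWT"]}')

# Stand-in for public-key signatures: a per-address shared secret.
SHARED_KEYS = {"1evJheeFpVQkHaZdAzEMj75w5T14ogZWT": b"demo-secret"}

def sign_request(address: str, request: bytes) -> bytes:
    return hmac.new(SHARED_KEYS[address], request, hashlib.sha256).digest()

def may_serve(address: str, request: bytes, signature: bytes) -> bool:
    # 1. The requesting address must be whitelisted in access.json.
    if address not in ACCESS_JSON["whitelist"]:
        return False
    # 2. The signature must check out for this exact request.
    expected = hmac.new(SHARED_KEYS[address], request, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

addr = "1evJheeFpVQkHaZdAzEMj75w5T14ogZWT"
req = b"getFile content.json"
assert may_serve(addr, req, sign_request(addr, req))
assert not may_serve("1SomeOtherAddress", req, b"\x00" * 32)
```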

krixano commented on April 28, 2024

Yeah, @anoadragon453 , not sure why people are making this more complicated than it needs to be... and if I hear one more freaking thing about IPFS I think I'm gonna freak out, lol.


Edit:

ZeroNet already does the majority of what IPFS does in a much simpler, easier to understand, and more versatile, way. Being more complicated does not make your system more functional or better.

Some people on here want to move ZeroNet over to using IPFS, and this is completely nonsensical imo. ZeroNet is ZeroNet because it's not IPFS, because it's simpler. These people use the excuse of not reinventing the wheel, but in case these people haven't noticed, all the code to get ZeroNet functioning is ALREADY WRITTEN and Working. So they conveniently skip over the fact that we would be throwing away 4 years of work to move ZeroNet over to a system that's more complicated than it should be, imo.

And that paragraph above applies even if we accept the notion of "reinventing the wheel" as being bad. Btw, Calculus was invented twice. The printing press was invented twice. As far as I'm concerned, the wheels we have today are nothing like the wheels when they were first invented. "Reinventing the wheel" is not bad... it allows you a chance to start from scratch and choose a different route than the one you had already chosen. What happens if we move over to IPFS and then decide we want to do something completely incompatible with IPFS? What happens if ZeroNet outlasts IPFS? You want to see a bad example of "reinventing the wheel"? Look at what browsers are doing!

OliverCole commented on April 28, 2024

I noticed you haven't labelled this one with help wanted. Is that intentional? Do you have a particular design in mind you could share?

This would be something I'd love to implement!

HelloZeroNet commented on April 28, 2024

@OliverCole AES and ECIES encrypt/decrypt functions were added some months ago: #216
It would allow encrypting all content on the site and decrypting it on the client side after download.

It's not ideal for every use case; there are many ways to implement private sites, and it all depends on what you want to do. Filter the IPs that can connect to your site? Add password protection? Only encrypt parts of the site?

OliverCole commented on April 28, 2024

@HelloZeroNet Yeah, I think we have similar thoughts on this.

To me you want to have the public site unencrypted, and then an encrypted 'subdirectory', or even more than one. This means the owner can have public content (how to contact them, perhaps), and then the private stuff all under one key/.bit domain. They can always choose to keep the public site blank if they wish.

We should think about each private 'subsite' as a site in itself though, with its own content.json etc. I feel like it should probably be rolled up in a zip file to avoid any visibility of the private site structure, but then that might make publishing difficult?

Some other thoughts:

  • You can support various access methods:
    • Simple 'password', entered into a ZeroNet-managed UI over the site - stretched into the shared decryption key
    • Certificate access. This would require the owner encrypting the decryption key to each cert and publishing the lot, but it's super easy for the users: they just know that a given certificate grants access. This also means you can rotate the underlying decryption key - on every site publish if required!
    • Bitcoin integration? Further in the future, what about something that would publish copies of the decryption key, encrypted to the public keys of Bitcoin addresses which had sent it an amount of Bitcoin?
  • You could manage the whole workflow quite neatly - users could write an access request, which would be encrypted to the site owner and published. The site owner could have an interface for reviewing them and granting access where needed.
  • I'm not sure IP filtering is worth considering because it's not guaranteed for a user, and some people are deliberately using ZeroNet through Tor.

It's definitely not ideal for all use cases - and in particular, any user with access can then distribute (or reveal) that access to other users, regardless of whatever UI we put over the top. But I think it would be pretty good within the limitations of ZeroNet.

HelloZeroNet commented on April 28, 2024

We are already using OpenSSL; pynacl is just an alternative to it, and it does not support the Bitcoin cryptography that we use most of the time.

OliverCole commented on April 28, 2024

So @HelloZeroNet, I want to have a crack at this - maybe just starting with the encryption/publishing part, for the 'simple password' case. Are you happy to merge if we come up with something good?

My first thought is on the metadata - we could add this in the files list in content.json, but that's not strictly accurate. Alternatively we could call it an include.

But I think a third list might be a good idea, eg:

{
 "address": "1evJheeFpVQkHaZdAzEMj75w5T14ogZWT",
 "description": "My example site",
 "files": {
  "index.html": {
   "sha512": "dc5af04d2cde806f4bab9a0dd6b09d367c9f2dd51dd5f22a36c2dc56aa6e925e",
   "size": 428
  }
 },
 "private": {
  "members": {
   "description": "Private members area - contact johnsmith on ZeroMail for access"
  }
 },
...

Signing and publishing would then ask for the encryption key (not a ZIP password! 😝), stretch it, and encrypt members.zip to members.zip.zeronet - the encrypted file gets published and the cleartext remains on disk.

members.zip would contain content.json at the top level, which would behave like any other subdirectory content.json. And members.zip.zeronet would be eligible for distribution by users, along with the rest of the site.

Later on, when we get to retrieving, a request for 1evJheeFpVQkHaZdAzEMj75w5T14ogZWT/members/ would trigger ZeroNet to request the key for members.zip.zeronet, decrypt it on disk and then proceed as normal.
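
The key-stretching step in that proposal might look like the minimal sketch below. The PBKDF2 parameters and salt handling are illustrative assumptions, not anything ZeroNet specifies:

```python
import hashlib

def stretch_key(password: str, salt: bytes) -> bytes:
    # Stretch the user-supplied encryption key into a 256-bit symmetric key.
    # PBKDF2-SHA256 with 200k iterations is an assumed, illustrative choice.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000, dklen=32)

salt = b"\x01" * 16   # would really be random (os.urandom) and published alongside
k = stretch_key("correct horse battery staple", salt)
assert len(k) == 32
# Same password + salt -> same key on every publishing machine.
assert k == stretch_key("correct horse battery staple", salt)

# Naming convention from the proposal: members.zip -> members.zip.zeronet
def encrypted_name(path: str) -> str:
    return path + ".zeronet"

assert encrypted_name("members.zip") == "members.zip.zeronet"
```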

HelloZeroNet commented on April 28, 2024

Not sure if storing everything twice is a good idea; a password/public-key based peer authorization is probably easier and would also allow removing users.

OliverCole commented on April 28, 2024

How do you mean twice? The only thing that would actually be shared out would be the encrypted zip.

Are you proposing the private bit is unencrypted, but only shared to people who should have access? Could you outline how that would work?

HelloZeroNet commented on April 28, 2024

To be able to request any file from peers, you would need to authenticate yourself using a public-key algorithm.

Another possibility: when any.jpg is requested over HTTP and it does not exist, look for any.jpg.encrypted and try to decrypt it using the AES keys you have associated with the site/directory.

This way the sign/publish method would remain the same. It also allows multi-user sites and only requires some smaller modifications in the UiServer/SiteStorage (for the SQL imports) and new API functions to add AES keys.
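
The .encrypted fallback lookup is easy to sketch. The function name and return shape here are invented, and real UiServer integration would differ:

```python
import tempfile
from pathlib import Path

def resolve(site_root: Path, request_path: str):
    """Serve the plain file if present; otherwise fall back to its
    .encrypted sibling, which must be decrypted before serving."""
    plain = site_root / request_path
    if plain.exists():
        return plain, False
    encrypted = site_root / (request_path + ".encrypted")
    if encrypted.exists():
        return encrypted, True   # decrypt with a site/directory AES key first
    return None, False

root = Path(tempfile.mkdtemp())
(root / "any.jpg.encrypted").write_bytes(b"...ciphertext...")
path, needs_decrypt = resolve(root, "any.jpg")
assert needs_decrypt and path.name == "any.jpg.encrypted"
```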

HelloZeroNet commented on April 28, 2024

I think there are two possibilities:

Per file encryption

Anyone is able to receive files/updates, but the files are encrypted

Pros:

  • Does not rely on connection security
  • Allows easy per-directory encryption
  • Allows anyone to host files without knowing their content (e.g. paid hosting)

Cons:

  • Performance: has to decrypt the files on the fly or build a cache
  • Not possible to remove users from the site
  • You can spy on site activity
  • (Future) Patch command can be problematic

Connection security

You need to authenticate the connection with other users before receiving any updates or files.
Pros:

  • Not possible to spy on site activity
  • May make it possible to create new passwords and remove users
  • Files are stored the same way as on any other site

Cons:

  • Relies on connection security (SSL/onion): MITM can be a problem
  • Per directory encryption is harder, but may be possible

OliverCole commented on April 28, 2024

You can spy on site activity

Are you talking about metadata, like file names/sizes etc? That's partly what the zip file is for - just to bundle it up and avoid that analysis.

(Future) Patch command can be problematic

I haven't read anything about this - what is it?

You can actually still achieve adding/removing users with per-file encryption, by picking a random symmetric key, encrypting the files (or zip file), and then encrypting a copy of that key to each user, exactly the same way PGP does it. If you want to add a user, simply publish an additional encrypted copy. Removing a user means picking a new random key and encrypting that for the remaining users.

I think per-file encryption is the best, but tell me about the patch command?

OliverCole commented on April 28, 2024

multi-user sites

You mean public zeronet proxies? True... but are you thinking that if it was per-file, it would be decrypted in the browser? Otherwise you still have to trust the server with the key.

per-user encryption does not really work for many users (100+)

Surely that's only (number of users * encrypted 256 bit key)? That's 32 users/kilobyte, minus some for overhead - doesn't seem insane to me.
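
A quick sanity check on those numbers (figures taken from this thread; the 32-byte entry ignores asymmetric wrapping overhead, which the ECIES figure accounts for):

```python
# A raw 256-bit symmetric key is 32 bytes, so a bare per-user key table
# holds 32 users per kilobyte, as estimated above.
encrypted_key_size = 32
users_per_kilobyte = 1024 // encrypted_key_size
assert users_per_kilobyte == 32

# With ECIES wrapping (225 bytes base64-encoded, per a later comment),
# re-keying a 10,000-user site adds roughly 2.2 MB of new data.
ecies_entry = 225
table_bytes = 10_000 * ecies_entry
assert table_bytes == 2_250_000
```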

Patch command is interesting.

Something else to consider is whether it is desirable for users without access (in the per-file model) to already be distributing the encrypted files, without access. Think about the scenario where Assange posted those diplomatic cables encrypted in bulk, as an insurance policy... you would want them to be replicated by interested parties, even if nobody but the author had a key.

I would say it would be good if ZeroNet supported that use case. In the connection security model, it wouldn't be possible because nobody would have access.

HelloZeroNet commented on April 28, 2024

Multi-user sites: interactive zeronet sites where every user has his/her own files. (ZeroTalk, ZeroBlog, ZeroMail, etc.)

If the files are encrypted and you want to remove a user, then you have to re-encrypt every file and everyone has to re-download all of them.

If you want to be sure of encryption, you are still able to encrypt the data using the AES+ECIES functions (like ZeroMail does)

OliverCole commented on April 28, 2024

Well, there is a way around having to re-encrypt all the files - version and increment the keys.

  1. Files A and B (in the main part of the site) are encrypted with K1, and U1 knows K1 because the site owner encrypted a copy of it with KU1.
  2. U2 publishes file C, and encrypts it with K1.
  3. Site owner 'deletes' U1. Site owner generates a new shared key K2, and writes a new table, with a copy of K2 encrypted to KU2, KU3 etc.
  4. From this point, K1 is considered 'compromised', and new and updated files will only use K2.
  5. Site owner makes a change to A, encrypts it with K2.
  6. U2 publishes new file D, encrypts it with K2.

So U1 can still access the files as they were when they had access (until they're republished anyway), BUT that's OK because they always had access to those files, and could have taken a copy at any time.
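
The bookkeeping for steps 1-6 might look like this sketch. The key-wrapping step is stubbed out (really it would be ECIES to each user's public key), and all names are invented:

```python
class KeyTable:
    """Versioned shared keys: removing a user rotates to a new key
    that is wrapped only for the remaining members."""

    def __init__(self):
        self.version = 0
        self.keys = {}      # version -> shared key (site owner's view)
        self.wrapped = {}   # version -> {user: wrapped copy of the key}
        self.members = set()

    def _wrap(self, user, key):
        return (user, key)  # stub for ECIES-encrypting `key` to `user`'s pubkey

    def rotate(self):
        self.version += 1
        key = f"K{self.version}".encode()
        self.keys[self.version] = key
        self.wrapped[self.version] = {u: self._wrap(u, key) for u in self.members}

    def add_user(self, user):
        self.members.add(user)
        if self.version:  # new user gets a copy of the *current* key only
            self.wrapped[self.version][user] = self._wrap(user, self.keys[self.version])

    def remove_user(self, user):
        self.members.discard(user)
        self.rotate()     # the old key is now considered compromised

t = KeyTable()
t.members = {"U1", "U2"}
t.rotate()              # K1, known to U1 and U2
t.remove_user("U1")     # K2, wrapped only for U2
assert "U1" not in t.wrapped[t.version]
assert "U2" in t.wrapped[t.version]
```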

Patch command

Could we make this work by simply encrypting the patches?

  • In the basic scenario, you simply send the patch encrypted with K1. Each node decrypts the file, applies the patch and re-encrypts.
  • In the advanced case where you've deleted a user, and are updating a file for the first time since, you encrypt the patch with K2, and set a flag to say that each node will decrypt the file with K1, apply the patch, and re-encrypt it with K2. Every node has to repeat the work, but that's not so bad.
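
Both patch cases can be sketched in a few lines. The XOR "cipher" below is purely a runnable stand-in for AES, and the patch is modeled as a simple byte replacement rather than a real diff:

```python
import hashlib

def toy_crypt(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key -- only to keep the sketch runnable;
    # the real thing would be AES with proper IV handling.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def apply_patch(ciphertext, old_key, new_key, patch, expected_sha512):
    # Decrypt with the old key, apply the patch, re-encrypt with the
    # new key, then verify the published sha512 of the new ciphertext.
    plaintext = toy_crypt(ciphertext, old_key)
    patched = plaintext.replace(*patch)   # stand-in for a real diff/patch
    new_ciphertext = toy_crypt(patched, new_key)
    assert hashlib.sha512(new_ciphertext).hexdigest() == expected_sha512
    return new_ciphertext

k1, k2 = b"key-one!", b"key-two!"         # k1 -> k2 after a user removal
original = toy_crypt(b"hello world", k1)
expected = hashlib.sha512(toy_crypt(b"hello zeronet", k2)).hexdigest()
updated = apply_patch(original, k1, k2, (b"world", b"zeronet"), expected)
assert toy_crypt(updated, k2) == b"hello zeronet"
```

In the basic scenario (no removal) the same function is called with `old_key == new_key`.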

HelloZeroNet commented on April 28, 2024

A new key release could work, but then you also have to keep the old keys in order to be able to decrypt the old content. ECIES takes more space than symmetric encryption: 225 bytes base64-encoded, so 10,000 users = 2.2 MB of new data added to the site when someone is removed.

It could be possible to encrypt the patch with some overhead: decrypt patch -> decrypt the whole file -> apply patch -> encrypt the file -> check sha512

It would be nice to check how other projects do this (syncthing, bittorrent sync, Tahoe-LAFS, etc.)

OliverCole commented on April 28, 2024

Actually, thinking about the patch command... you said the zip file wouldn't work because of the patch command: the patch could simply include a path into the zip file. It would be interesting to do some experiments and see if you could deterministically produce the same patched encrypted zip file on multiple hosts/OSs. It should be possible though.

But anyway, there are a lot of questions and complexity here, and we'll never get the right answer straight away. Why don't we implement the simplest possible idea: single shared AES per-file encryption (ie, image.jpg.encrypted), get it working with the patch command and new key management GUI, and get it out there to see how people use it? Or indeed if anyone does actually bother using it at all!

That way we avoid the complexity of adding/removing individual users, managing access etc, and we don't spend a bunch of time building something people won't use.

HelloZeroNet commented on April 28, 2024

The per-file encryption is not compatible with the patch command because it's not a good idea to re-use the IV. If we add a new feature we have to support it forever, so it needs to be as flexible as possible.

OliverCole commented on April 28, 2024

By reuse, you mean it's not a good idea to encrypt the first version of the file with {K1,IV1}, encrypt the patch with {K1,IV1}, decrypt the file, make the change, and then re-encrypt it with {K1,IV1} again? You're right, that would be bad.

However, we can send IV in the clear without risk. So we would need to use a fresh IV for the patch encryption, and the patch would need to contain another fresh IV to use after the patch has been applied. We might want to include checksums for both plaintext and ciphertext inside the patch.

However, IV presents a bigger question for multi-user sites - we would need to use fresh IVs for user publications, and publish those along with the ciphertext. The sqlite DB only exists locally, and never leaves the machine, right? Or do we need to think about that too?

In fact, if we don't use zip files, we have to publish IVs per file too. On a site with a lot of files that could eat a lot of entropy from the pool.

OliverCole commented on April 28, 2024

It's probably worth trying to put this all together in a wiki page that shows all the algos, keys and messages... any thoughts before I get started?

HelloZeroNet commented on April 28, 2024

I think it's fine to put it here

(deleted user) commented on April 28, 2024

In fact, if we don't use zip files, we have to publish IVs per file too. On a site with a lot of files that could eat a lot of entropy from the pool.

The IVs can be put inside the file along with the ciphertext and the MAC. Patches could send a single random value and the IVs will be derived using HKDF, but it shouldn't really be a problem either way.
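
Deriving all per-file IVs from one shipped random value takes only a few lines of stdlib HMAC. This is a minimal RFC 5869 HKDF-SHA256 (extract + expand); the `info` labels are invented for illustration:

```python
import hashlib
import hmac
import os

def hkdf(ikm: bytes, info: bytes, length: int, salt: bytes = b"") -> bytes:
    # RFC 5869: PRK = HMAC(salt, IKM); T(i) = HMAC(PRK, T(i-1) | info | i)
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# One random value is shipped with the patch; each file's IV is derived from it.
seed = os.urandom(32)
iv_a = hkdf(seed, b"iv:members/index.html", 16)
iv_b = hkdf(seed, b"iv:members/style.css", 16)
assert iv_a != iv_b                                       # distinct IV per file
assert iv_a == hkdf(seed, b"iv:members/index.html", 16)   # deterministic for all peers
```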

anoadragon453 commented on April 28, 2024

Glad to see there's some talk on this. I'd like to throw in the idea of enabling the ability to create multiple passwords/secret addresses (a la Tor's HidServAuth config command [spec]) to give out to multiple trusted users.

That way if a password or the address were ever compromised, the site owner could simply disable the compromised user's address/password, while everyone else's would still work.
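
A sketch of such per-user revocable credentials. The token format and storage are invented for illustration; Tor's client authorization works differently under the hood:

```python
import secrets

# One secret per trusted user, so each can be revoked individually.
tokens = {user: secrets.token_urlsafe(16) for user in ("alice", "bob", "carol")}

def has_access(presented: str) -> bool:
    return presented in set(tokens.values())

bob_token = tokens["bob"]
assert has_access(bob_token)

# Bob's token leaks: revoke only his credential; everyone else keeps working.
del tokens["bob"]
assert not has_access(bob_token)
assert has_access(tokens["alice"])
```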

OliverCole commented on April 28, 2024

I likely don’t have time to work on this any more, so anyone should feel free to have a go.
All I would suggest is making sure to support various ‘versions’ of private sites - so that if we want to support more advanced use cases/cryptographic structures later we still can.

I didn’t know there were bounties on ZeroNet issues though - is there a link?

anoadragon453 commented on April 28, 2024

@OliverCole https://zeronet.readthedocs.io/en/latest/help_zeronet/donate/#private-sites

0zAND1z commented on April 28, 2024

Have you considered using MoiBit(https://www.moibit.io) to host private/permissioned site files?

An IPFS based private swarm/network with read guarantees to a limited number of private users should do the job? Hope this approach helps. Happy to contribute.

OliverCole commented on April 28, 2024

@anoadragon453 This comment explains some of the complexities: #62 (comment)

krixano commented on April 28, 2024

Well... there's no way to prevent a person from keeping data they already got before they were removed from the allowed people. So you'd just re-encrypt the data, or stop sending that person deltas (btw, ZeroNet already does a similar-ish [but not really] thing: if a zite owner changes the permissions on their zite to disallow someone from doing stuff, that person won't be able to store zite data anymore).

rujash7 commented on April 28, 2024

when you remove a user, do you want to re-encrypt the site content to immediately deny them access to the data? Or just encrypt changes from that point? How do you make all that work with the delta updates?

If someone's already got the site contents, they can copy that off somewhere. If they initially had access, you should consider anything they had access to compromised.

Don't worry about encrypting anything here, that just adds complexity. Just stop sending them deltas.

The purpose is to have redundant hosting of content, even encrypted, so public-facing visitors can host the back-end encrypted content. It is about keeping the changes completely within the ZeroNet framework, not giving out passwords on other platforms or in emails, but having access be automatic and dynamically assigned. This will (theoretically) enable advanced measures like having front-end content editable by a few back-end admins, no matter where they are, and without additional compromises or points of failure.

anoadragon453 commented on April 28, 2024

This will (theoretically) enable advanced measures like having front end content editable by a few backend admins, no matter where they are, and without additional compromises or points of failure.

This is already a thing in ZeroNet, where content updates require signatures by pre-approved keys. Encryption doesn't help here. If only a select few people can decrypt some content, you can't then go and serve that content to random visitors and expect it to be useful other than just for storage.

The purpose is to have redundant hosting of content, even encrypted, so public facing visitors can host the back end encrypted content.

This is a good point, and may be required if sharing private content with a select few people ends up in the hosting not being reliable. But to solve this we need to figure out a cryptographic method of encrypting the content efficiently to multiple participants.

We could just crib off of existing solutions here, such as Matrix's Olm, based on Signal's double-ratchet algorithm. This works by creating a shared secret that's sent through encrypted channels to participants, and is then used to encrypt a number of messages (in this case, diffs to ZeroNet content).

When someone leaves or joins the conversation, the key is ratcheted forward to a new one, such that the person who left isn't able to decrypt new content.

As long as people aren't joining and leaving constantly, this is actually pretty efficient.
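
A minimal hash-ratchet sketch of that idea (this is not Olm itself; note that ratcheting alone does not handle removals, since the leaver could ratchet forward too, which is why membership changes require distributing a fresh key over encrypted channels):

```python
import hashlib
import os

def ratchet(key: bytes) -> bytes:
    # One-way step: older keys cannot be derived from newer ones, which
    # protects *past* content if a current key ever leaks.
    return hashlib.sha256(b"ratchet" + key).digest()

k1 = b"\x01" * 32
k2 = ratchet(k1)
assert k2 != k1
assert ratchet(k1) == k2   # every remaining member derives the same next key

# On a membership change, mint a fresh random key and send it over
# encrypted channels to the remaining members only.
k3 = os.urandom(32)
```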

rujash7 commented on April 28, 2024

This is already a thing in ZeroNet, where content updates require signatures by pre-approved keys. Encryption doesn't help here. If only a select few people can decrypt some content, you can't then go and serve that content to random visitors and expect it to be useful other than just for storage.

The main thing I am imagining is a tiered access structure with this, so if we discover a bad actor with access, the site owner can change the keys behind a deeper wall of access, and then distribute that changed access code to non-bad-actors via zeronet email, or just a list of actors. Theoretically seemed possible.

Since now it seems access is set permanently with a code, and if it's leaked, the site is completely compromised.

But yeah, that would require the encrypted content somehow affecting the non-encrypted. What you've outlined may suffice. The goal was to implement some moderation means for site admins that isn't permanent - on top of private access areas.

If access to mid-tier (public/admin/owner) could be dynamic, then the moment a person (admin) is found out to be compromised, they would likely unwittingly download the site content that excludes their access on next visit, providing more security against leaks.

All this without losing a public audience (your redundancy) or leaving zeronet.
