Portability issue in hashes · targets (closed)

wlandau commented on August 25, 2024

Comments (25)

shikokuchuo commented on August 25, 2024

This helps for in-memory R objects only - these are always implicitly serialized before they can be hashed.

For files, these are hashed as-is, as binary blobs. So if the same file is moved to a different machine, it will hash the same. Whether different machines produce identical files in the first place depends on the serialization method used to generate them. I hope that makes sense.
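The distinction above can be sketched in R. This is a hypothetical illustration: it assumes secretbase's interface, where a hashing function such as sha256() accepts either an R object (serialized first) or a file path via a `file` argument (read as raw bytes); the exact signature may differ from the released package.

```r
library(secretbase)

x <- list(a = 1:10, b = "hello")

# In-memory object: implicitly serialized, then the serialization
# stream is hashed, so the result does not depend on the machine.
sha256(x)

# File: the raw bytes on disk are hashed as-is. Moving this file to
# another machine does not change its hash, but *regenerating* the
# file on another machine may produce different bytes (and hence a
# different hash) because of headers written at serialization time.
f <- tempfile(fileext = ".rds")
saveRDS(x, f)
sha256(file = f)
```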

shikokuchuo commented on August 25, 2024

@wlandau in case you were wondering, the PR shikokuchuo/secretbase#5 was ultimately not merged. After further research, I think it would be worthwhile to implement another hash, given we'll be breaking anyway. My original motivation for trying XXH64 was to make it non-breaking, if you remember.

wlandau commented on August 25, 2024

Thanks @shikokuchuo for chiming in here.

@noamross, I believe files moved from one machine to another should still have the same hashes. After #1244 is fixed, a pipeline moved to a different machine should stay up to date. If the contents of e.g. arrow files depend on the locale, that only comes into play when a target reruns for other reasons. The hash of the new file will be different, but that in itself will not have caused the target to rerun in the first place.

noamross commented on August 25, 2024

@wlandau Thanks. I see that moving files and metadata together from one machine to another should maintain the state of the pipeline. I'm trying to understand (and if possible, expand) the conditions under which different systems can produce byte-identical R objects and be able to compare or share them (for purposes related to the extensions/experiments I discuss in #1232 (comment)). It seems it will probably require loading and re-hashing an object in memory, but I was curious whether it is possible on objects serialized to disk.

shikokuchuo commented on August 25, 2024

Returning to the issue, shikokuchuo/secretbase#6 is now ready. It implements SipHash (specifically SipHash-1-3, a highly performant variant).

This is a higher-quality hash than non-cryptographic hashes such as 'xxhash', and it retains some security guarantees whilst still being around the same speed. Technically it is not a cryptographic hash but a pseudorandom function. Whilst we do not need the security guarantees here (and we are using it with the fixed reference key to maximise performance), its collision resistance is guaranteed, unlike something like 'xxhash', where trivial collisions have been found and the quality of the hash has been questioned.

It has also seen wide adoption, e.g. as the default hash in newer Python, in Rust, and in various other languages. I'll invite you to try it out before merging. Feel free to post comments on the PR itself.
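Trying it out might look like the following sketch. It assumes the siphash13() function from the PR follows secretbase's usual object/file interface; the function name and arguments are taken from the PR discussion and may differ in the merged version.

```r
library(secretbase)

# Hash an in-memory R object (serialized first; the package is
# described above as using a fixed reference key for speed).
siphash13(mtcars)

# Hash a file's raw bytes, assuming a `file` argument as in the
# package's other hash functions.
f <- tempfile(fileext = ".csv")
write.csv(mtcars, f)
siphash13(file = f)
```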

shikokuchuo commented on August 25, 2024

@wlandau SipHash is now merged into the main branch of secretbase, as I'm happy with its quality after testing against the reference implementation.

wlandau commented on August 25, 2024

Or, as long as targets needs two different hashing packages anyway, I might try xxhashlite, as @njtierney suggested in #1212.

wlandau commented on August 25, 2024

A couple of minor notes:

  1. It may be time for targets version 2.0.0. According to semantic versioning best practice, a major version bump should mark breaking changes. It is debatable whether changing hashes is actually a breaking change, since the code will still work, but there have been so many improvements since 1.0.0 that there is a case for bumping the major version.
  2. tar_make(), tar_make_future(), and tar_make_clustermq() should prompt the user with a dialogue explaining the change in hashes and giving them the opportunity to downgrade targets to keep the pipeline up to date.

wlandau commented on August 25, 2024

Also:

  1. Today I confirmed empirically that the inputs supplied to vdigest64(), vdigest64_file(), and vdigest32() always have length 1. In other words, targets always supplies scalars to these functions. That means targets can safely discard the vectorization features in functions like digest_obj64().
  2. As I state in shikokuchuo/secretbase#5 (comment), I will take this opportunity to move targets away from 32-bit hashes.

noamross commented on August 25, 2024

I have a couple of questions related to this:

  • Would switching hash implementations help for targets objects, given that targets hashes the files as stored on disk, and the serialized files on disk have the R version and locale information stored in their headers?
  • Do we know if the header issue applies to qs or any other serialization format, and do those need to be handled differently?

wlandau commented on August 25, 2024

I'm not sure if this would help with local files created by the targets in the pipeline. RDS is the only file format where I would expect headers to be in a place where secretbase could detect them. Hash stability seems like an important issue for qs, although I have not tested the outcomes on different machines.

noamross commented on August 25, 2024

This makes sense, but an area of ambiguity is RDS targets: not files generated by format = "file", but regular R objects saved in the local or remote store. If they are generated by two machines, identical as R objects but under different locales, can they be moved between machines and reused because they have the same hash? Could hashing the target as saved on disk skip the header, if we know it is in RDS format?

shikokuchuo commented on August 25, 2024

As far as I'm aware, that shouldn't be a problem. You should be able to move RDS files to a different machine with a different R version and locale and get identical R objects when loaded (assuming both machines support R serialization version 3, i.e. R >= 3.5).
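This round-trip guarantee is easy to check in base R: the object survives serialization to disk and back exactly, even though the bytes on disk carry a version header.

```r
x <- list(a = 1:10, b = "hello", c = data.frame(n = rnorm(5)))
f <- tempfile(fileext = ".rds")
saveRDS(x, f)

# Round-trip identity holds, on this machine or any other with
# serialization version 3 support:
identical(readRDS(f), x)   # TRUE
```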

noamross commented on August 25, 2024

Ah, but while the R objects loaded are identical, I believe targets is hashing the RDS files after writing and before reading.

shikokuchuo commented on August 25, 2024

If the same object, identical on two different computers with different locales, is saved as an RDS file on each machine, and these files are hashed, then the hashes will be different.

The R language only guarantees that a round trip of serialization and unserialization gives you identical objects regardless of where this is done. Unfortunately, it does not guarantee hash stability.
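The source of the instability is visible in the file header. Per R's serialization format, an uncompressed RDS file begins with a format marker followed by the serialization version, the writing R version, and the minimal reader version (version 3 additionally records the native encoding), so identical payloads written by different R sessions can still hash differently. A sketch of peeking at those bytes:

```r
f <- tempfile(fileext = ".rds")
saveRDS(list(a = 1), f, compress = FALSE)

con <- file(f, "rb")
readChar(con, 2, useBytes = TRUE)  # "X\n" -- XDR binary format marker
readBin(con, "integer", n = 3, endian = "big")
# serialization version, writer's R version, minimal reader version
close(con)
```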

shikokuchuo commented on August 25, 2024

> I'm not sure if this would help with local files created by the targets in the pipeline. RDS is the only file format where I would expect headers to be in a place where secretbase could detect them. Hash stability seems like an important issue for qs, although I have not tested the outcomes on different machines.

@wlandau so you're clear on this, the underlying functions called for files and objects are completely separate - files don't go through the R serialization mechanism, so {secretbase} wouldn't 'detect' anything when hashing a file.

It might be possible to stream-unserialize and hash an RDS file (if we know it's an RDS file), but I'm not certain, and it'd be quite some effort!

shikokuchuo commented on August 25, 2024

@noamross as in my earlier response to @wlandau, I think it would theoretically be possible to special-case RDS files and pass them to our hash function attached to R unserialization (as opposed to serialization, which is what we do for in-memory R objects), skipping the headers in the process. I don't know whether there would be any implementation obstacles in practice. But this is a lot of effort, and unless it is somehow more broadly useful, I don't think anyone will implement such functionality.

shikokuchuo commented on August 25, 2024

That is, if this is of primary concern, it is probably better to save the files in some other, invariant format.

noamross commented on August 25, 2024

Indeed, I was just thinking I should test how this all ends up working with qs (which I think has built-in xxhash).

wlandau commented on August 25, 2024

Awesome! I will benchmark it later this week.

wlandau commented on August 25, 2024

As we discussed: given its deliberate focus on small data, as well as shikokuchuo/secretbase#8, I am having second thoughts about SipHash for targets.

@shikokuchuo, for secretbase, do you have plans for alternative hashes designed to be fast for large data? Otherwise, to solve this immediate issue, I am considering keeping digest and downgrading to serialization version 2.

shikokuchuo commented on August 25, 2024

I'm still quite jetlagged - I forgot I was meant to be implementing SipHash anyway. From your benchmarking, the total time for file hashing is much lower than for in-memory hashing.

I still think SipHash is a good choice. It is a fast hash, and I am not aware of a faster hash of comparable quality.

Edit: see shikokuchuo/secretbase#8 (reply in thread)

wlandau commented on August 25, 2024

I forgot I implemented this, but targets calculates hashes on parallel workers if storage = "worker" is set in tar_option_set() or tar_target(). So that makes me willing to accept a slightly slower hash. And our conversation today helped convince me that performance shouldn't nonlinearly plummet at some unknown threshold beyond 1 GB.

Before switching targets to SipHash, I think I just need to read more about the quality of xxhash vs. SipHash vs. cryptographic hashes. I don't take this decision lightly, and after the switch, I hope never to change the hash in targets again.

wlandau commented on August 25, 2024

From a closer look at https://eprint.iacr.org/2012/351.pdf, I see comments like:

> However, essentially all standardized MACs and state-of-the-art MACs are optimized for long messages, not for short messages. Measuring long-message performance hides the overheads caused by large MAC keys, MAC initialization, large MAC block sizes, and MAC finalization.

When the authors claim SipHash is "optimized for short inputs", I think they mean it reduces this overhead. It does not necessarily mean the hash is slow for large inputs, which was originally my main concern.

But also:

> We comment that SipHash is not meant to be, and (obviously) is not, collision-resistant.

Maybe the authors are comparing SipHash with cryptographic hash functions, maybe not. So I am not sure how much better SipHash-1-3 is than xxhash64 with respect to collision resistance. (Although I would not expect xxhash64 to be better.)

shikokuchuo commented on August 25, 2024

Your guess is as good as mine as to that throw-away comment. But it is designed to guard against generated collisions, which are trivial for the weaker hashes: https://www.youtube.com/watch?v=Vdrab3sB7MU That's why it's been adopted as the hash used by Python, Rust, etc. By definition, if it is a true PRF as claimed (indistinguishable from a uniformly random function), then it should not be any worse at collisions than other hashes.
