
πŸ’©πŸ’΅ but for your data. If you've got the hash, we've got the cache β„’ (moved)

Home Page: https://github.com/npm/cacache

License: Other

Language: JavaScript 100.00%
Topics: nodejs, cache, content-addressed, npm, filesystem-cache

cacache's Introduction

cacache [badges: npm version, license, Travis, AppVeyor, Coverage Status]

NOTE: This repository has moved to https://github.com/npm/cacache

cacache is a Node.js library for managing local key and content address caches. It's really fast, really good at concurrency, and it will never give you corrupted data, even if cache files get corrupted or manipulated.

It was originally written to be used as npm's local cache, but can just as easily be used on its own.

Translations: espaΓ±ol

Install

$ npm install --save cacache


Example

const cacache = require('cacache/en')
const fs = require('fs')

const tarball = '/path/to/mytar.tgz'
const cachePath = '/tmp/my-toy-cache'
const key = 'my-unique-key-1234'

// Cache it! Use `cachePath` as the root of the content cache
cacache.put(cachePath, key, '10293801983029384').then(integrity => {
  console.log(`Saved content to ${cachePath}.`)
})

const destination = '/tmp/mytar.tgz'

// Copy the contents out of the cache and into their destination!
// But this time, use stream instead!
cacache.get.stream(
  cachePath, key
).pipe(
  fs.createWriteStream(destination)
).on('finish', () => {
  console.log('done extracting!')
})

// The same thing, but skip the key index.
// (`integrityHash` here is the integrity value that `cacache.put` resolved with above.)
cacache.get.byDigest(cachePath, integrityHash).then(data => {
  fs.writeFile(destination, data, err => {
    console.log('tarball data fetched based on its sha512sum and written out!')
  })
})

Features

  • Extraction by key or by content address (shasum, etc)
  • Subresource Integrity web standard support
  • Multi-hash support - safely host sha1, sha512, etc, in a single cache
  • Automatic content deduplication
  • Fault tolerance (immune to corruption, partial writes, process races, etc)
  • Consistency guarantees on read and write (full data verification)
  • Lockless, high-concurrency cache access
  • Streaming support
  • Promise support
  • Pretty darn fast -- sub-millisecond reads and writes including verification
  • Arbitrary metadata storage
  • Garbage collection and additional offline verification
  • Thorough test coverage
  • There's probably a bloom filter in there somewhere. Those are cool, right? πŸ€”

Contributing

The cacache team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The Contributor Guide has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear.

All participants and maintainers in this project are expected to follow Code of Conduct, and just generally be excellent to each other.

Please refer to the Changelog for project history details, too.

Happy hacking!

API

Using localized APIs

cacache includes a complete API in English, with the same features as other translations. To use the English API as documented in this README, use require('cacache/en'). This is also currently the default if you do require('cacache'), but may change in the future.

cacache also supports other languages! You can find the list of currently supported ones by looking in ./locales in the source directory. You can use the API in that language with require('cacache/<lang>').
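For instance, a minimal sketch of loading two locale builds side by side (the es locale is an assumption based on the espaΓ±ol translation mentioned above):

const cacacheEn = require('cacache/en') // English API, currently the default
const cacacheEs = require('cacache/es') // assumed: a Spanish build under ./locales

// Both expose the same functions; only messages and error text differ.
cacacheEn.ls('/tmp/my-toy-cache').then(console.log)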

Want to add support for a new language? Please go ahead! You should be able to copy ./locales/en.js and ./locales/en.json and fill them in. Translating the README.md is a bit more work, but also appreciated if you get around to it. πŸ‘πŸΌ

> cacache.ls(cache) -> Promise<Object>

Lists info for all entries currently in the cache as a single large object. Each entry in the object will be keyed by the unique index key, with corresponding get.info objects as the values.

Example
cacache.ls(cachePath).then(console.log)
// Output
{
  'my-thing': {
    key: 'my-thing',
    integrity: 'sha512-BaSe64/EnCoDED+HAsh==',
    path: '.testcache/content/deadbeef', // joined with `cachePath`
    time: 12345698490,
    size: 4023948,
    metadata: {
      name: 'blah',
      version: '1.2.3',
      description: 'this was once a package but now it is my-thing'
    }
  },
  'other-thing': {
    key: 'other-thing',
    integrity: 'sha1-ANothER+hasH=',
    path: '.testcache/content/bada55',
    time: 11992309289,
    size: 111112
  }
}

> cacache.ls.stream(cache) -> Readable

Lists info for all entries currently in the cache, streamed one entry at a time.

This works just like ls, except get.info entries are returned as 'data' events on the returned stream.

Example
cacache.ls.stream(cachePath).on('data', console.log)
// Output
{
  key: 'my-thing',
  integrity: 'sha512-BaSe64HaSh',
  path: '.testcache/content/deadbeef', // joined with `cachePath`
  time: 12345698490,
  size: 13423,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}

{
  key: 'other-thing',
  integrity: 'whirlpool-WoWSoMuchSupport',
  path: '.testcache/content/bada55',
  time: 11992309289,
  size: 498023984029
}

{
  ...
}

> cacache.get(cache, key, [opts]) -> Promise({data, metadata, integrity})

Returns an object with the cached data, digest, and metadata identified by key. The data property of this object will be a Buffer instance that presumably holds some data that means something to you. I'm sure you know what to do with it! cacache just won't care.

integrity is a Subresource Integrity string. That is, a string that can be used to verify data, which looks like <hash-algorithm>-<base64-integrity-hash>.

If there is no content identified by key, or if the locally-stored data does not pass the validity checksum, the promise will be rejected.

A sub-function, get.byDigest, may be used for identical behavior, except lookup will happen by integrity hash, bypassing the index entirely. This version of the function only returns the data itself, without any wrapper.

Note

This function loads the entire cache entry into memory before returning it. If you're dealing with Very Large data, consider using get.stream instead.

Example
// Look up by key
cacache.get(cachePath, 'my-thing').then(console.log)
// Output:
{
  metadata: {
    thingName: 'my'
  },
  integrity: 'sha512-BaSe64HaSh',
  data: Buffer#<deadbeef>,
  size: 9320
}

// Look up by digest
cacache.get.byDigest(cachePath, 'sha512-BaSe64HaSh').then(console.log)
// Output:
Buffer#<deadbeef>

> cacache.get.stream(cache, key, [opts]) -> Readable

Returns a Readable Stream of the cached data identified by key.

If there is no content identified by key, or if the locally-stored data does not pass the validity checksum, an error will be emitted.

metadata and integrity events will be emitted before the stream closes, if you need to collect that extra data about the cached entry.

A sub-function, get.stream.byDigest, may be used for identical behavior, except lookup will happen by integrity hash, bypassing the index entirely. This version does not emit the metadata and integrity events at all.

Example
// Look up by key
cacache.get.stream(
  cachePath, 'my-thing'
).on('metadata', metadata => {
  console.log('metadata:', metadata)
}).on('integrity', integrity => {
  console.log('integrity:', integrity)
}).pipe(
  fs.createWriteStream('./x.tgz')
)
// Outputs:
metadata: { ... }
integrity: 'sha512-SoMeDIGest+64=='

// Look up by digest
cacache.get.stream.byDigest(
  cachePath, 'sha512-SoMeDIGest+64=='
).pipe(
  fs.createWriteStream('./x.tgz')
)

> cacache.get.info(cache, key) -> Promise

Looks up key in the cache index, returning information about the entry if one exists.

Fields
  • key - Key the entry was looked up under. Matches the key argument.
  • integrity - Subresource Integrity hash for the content this entry refers to.
  • path - Filesystem path where content is stored, joined with cache argument.
  • time - Timestamp the entry was first added on.
  • size - Size of the cached content, in bytes.
  • metadata - User-assigned metadata associated with the entry/content.
Example
cacache.get.info(cachePath, 'my-thing').then(console.log)

// Output
{
  key: 'my-thing',
  integrity: 'sha256-MUSTVERIFY+ALL/THINGS==',
  path: '.testcache/content/deadbeef',
  time: 12345698490,
  size: 849234,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}

> cacache.get.hasContent(cache, integrity) -> Promise

Looks up a Subresource Integrity hash in the cache. If content exists for this integrity, it will return an object with the specific single integrity hash that was found (as sri) and the size of the found content (as size). If no content exists for this integrity, it will return false.

Example
cacache.get.hasContent(cachePath, 'sha256-MUSTVERIFY+ALL/THINGS==').then(console.log)

// Output
{
  sri: {
    source: 'sha256-MUSTVERIFY+ALL/THINGS==',
    algorithm: 'sha256',
    digest: 'MUSTVERIFY+ALL/THINGS==',
    options: []
  },
  size: 9001
}

cacache.get.hasContent(cachePath, 'sha512-NOT+IN/CACHE==').then(console.log)

// Output
false

> cacache.put(cache, key, data, [opts]) -> Promise

Inserts data passed to it into the cache. The returned Promise resolves with a digest (generated according to opts.algorithms) after the cache entry has been successfully written.

Example
fetch(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).then(data => {
  return cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', data)
}).then(integrity => {
  console.log('integrity hash is', integrity)
})

> cacache.put.stream(cache, key, [opts]) -> Writable

Returns a Writable Stream that inserts data written to it into the cache. Emits an integrity event with the digest of written contents when it succeeds.

Example
request.get(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).pipe(
  cacache.put.stream(
    cachePath, 'registry.npmjs.org|cacache@1.0.0'
  ).on('integrity', d => console.log(`integrity digest is ${d}`))
)

> cacache.put options

cacache.put functions have a number of options in common.

opts.metadata

Arbitrary metadata to be attached to the inserted key.

opts.size

If provided, the data stream will be verified to check that enough data was passed through. If there's more or less data than expected, insertion will fail with an EBADSIZE error.

opts.integrity

If present, the pre-calculated digest for the inserted content. If this option is provided and does not match the post-insertion digest, insertion will fail with an EINTEGRITY error.

algorithms has no effect if this option is present.

opts.algorithms

Default: ['sha512']

Hashing algorithms to use when calculating the subresource integrity digest for inserted data. Can use any algorithm listed in crypto.getHashes() or 'omakase'/'γŠδ»»γ›γ—γΎγ™' to pick a random hash algorithm on each insertion. You may also use any anagram of 'modnar' to use this feature.

Currently only supports one algorithm at a time (i.e., an array length of exactly 1). Has no effect if opts.integrity is present.

opts.uid/opts.gid

If provided, cacache will do its best to make sure any new files added to the cache use this particular uid/gid combination. This can be used, for example, to drop permissions when someone uses sudo, but cacache makes no assumptions about your needs here.

opts.memoize

Default: null

If provided, cacache will memoize the given cache insertion in memory, bypassing any filesystem checks for that key or digest in future cache fetches. Nothing will be written to the in-memory cache unless this option is explicitly truthy.

If opts.memoize is an object or a Map-like (that is, an object with get and set methods), it will be written to instead of the global memoization cache.

Reading from disk can be forced by explicitly passing memoize: false to the reader functions, but by default they will read from memory if a memoized copy is available.
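A minimal sketch combining several of these options (the key, data, and the Map used for memoization are illustrative):

const cacache = require('cacache/en')

const cachePath = '/tmp/my-toy-cache'
const memoCache = new Map() // written to instead of the global memoization cache

cacache.put(cachePath, 'my-key', 'some data', {
  metadata: { source: 'example' }, // arbitrary metadata attached to the key
  size: 9,                         // byte length of 'some data'; a mismatch fails with EBADSIZE
  algorithms: ['sha512'],          // the default; only one algorithm at a time
  memoize: memoCache               // any object with get/set methods works here
}).then(integrity => {
  console.log('stored with integrity', integrity)
})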

> cacache.rm.all(cache) -> Promise

Clears the entire cache, mainly by blowing away the cache directory itself.

Example
cacache.rm.all(cachePath).then(() => {
  console.log('THE APOCALYPSE IS UPON US 😱')
})

> cacache.rm.entry(cache, key) -> Promise

Alias: cacache.rm

Removes the index entry for key. Content will still be accessible if requested directly by content address (get.stream.byDigest).

To remove the content itself (which might still be used by other entries), use rm.content. Or, to safely vacuum any unused content, use verify.

Example
cacache.rm.entry(cachePath, 'my-thing').then(() => {
  console.log('I did not like it anyway')
})

> cacache.rm.content(cache, integrity) -> Promise

Removes the content identified by integrity. Any index entries referring to it will not be usable again until the content is re-added to the cache with an identical digest.

Example
cacache.rm.content(cachePath, 'sha512-SoMeDIGest/IN+BaSE64==').then(() => {
  console.log('data for my-thing is gone!')
})

> cacache.setLocale(locale)

Configure the language/locale used for messages and errors coming from cacache. The list of available locales is in the ./locales directory in the project root.
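A one-line usage sketch (the es locale is an assumption based on the espaΓ±ol translation mentioned earlier):

const cacache = require('cacache')
cacache.setLocale('es') // subsequent messages and errors use the Spanish locale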

Interested in contributing more languages? Submit a PR!

> cacache.clearMemoized()

Completely resets the in-memory entry cache.

> tmp.mkdir(cache, opts) -> Promise<Path>

Returns a unique temporary directory inside the cache's tmp dir. This directory will use the same safe user assignment that all the other stuff uses.

Once the directory is made, it's the user's responsibility to ensure that all files within it are created according to the same opts.gid/opts.uid settings that would be passed in. If not, you can ask cacache to do it for you by calling tmp.fix(), which will fix all tmp directory permissions.

If you want automatic cleanup of this directory, use tmp.withTmp().

Example
cacache.tmp.mkdir(cache).then(dir => {
  fs.writeFile(path.join(dir, 'blablabla'), Buffer.from('1234'), err => {
    if (err) throw err
  })
})

> tmp.withTmp(cache, opts, cb) -> Promise

Creates a temporary directory with tmp.mkdir() and calls cb with it. The created temporary directory will be removed when the return value of cb() resolves -- that is, if you return a Promise from cb(), the tmp directory will be automatically deleted once that promise completes.

The same caveats apply when it comes to managing permissions for the tmp dir's contents.

Example
cacache.tmp.withTmp(cache, dir => {
  // assumes a promisified fs (e.g. Bluebird-style fs.writeFileAsync)
  return fs.writeFileAsync(path.join(dir, 'blablabla'), Buffer.from('1234'))
}).then(() => {
  // `dir` no longer exists
})

Subresource Integrity Digests

For content verification and addressing, cacache uses strings following the Subresource Integrity spec. That is, any time cacache expects an integrity argument or option, it should be in the format <hashAlgorithm>-<base64-hash>.

One deviation from the current spec is that cacache will support any hash algorithms supported by the underlying Node.js process. You can use crypto.getHashes() to see which ones you can use.

Generating Digests Yourself

If you have an existing content shasum, it is generally formatted as a hexadecimal string (a sha1, for example, would look like: 5f5513f8822fdbe5145af33b64d8d970dcf95c6e). In order to be compatible with cacache, you'll need to convert this to an equivalent Subresource Integrity string. For this example, the corresponding hash would be: sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4=.

If you want to generate an integrity string yourself for existing data, you can use something like this:

const crypto = require('crypto')
const hashAlgorithm = 'sha512'
const data = 'foobarbaz'

const integrity = (
  hashAlgorithm +
  '-' +
  crypto.createHash(hashAlgorithm).update(data).digest('base64')
)

You can also use ssri to have a richer set of functionality around SRI strings, including generation, parsing, and translating from existing hex-formatted strings.
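For instance, a short sketch using ssri's fromData and fromHex helpers to produce the same kinds of strings (the sha512 output is elided):

const ssri = require('ssri')

// Generate an SRI string from data (equivalent to the manual crypto example above).
const integrity = ssri.fromData('foobarbaz', { algorithms: ['sha512'] })
console.log(integrity.toString()) // => 'sha512-...'

// Convert an existing hex-formatted sha1 into an SRI string.
const fromHex = ssri.fromHex('5f5513f8822fdbe5145af33b64d8d970dcf95c6e', 'sha1')
console.log(fromHex.toString()) // => 'sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4='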

> cacache.verify(cache, opts) -> Promise

Checks out and fixes up your cache:

  • Cleans up corrupted or invalid index entries.
  • Custom entry filtering options.
  • Garbage collects any content entries not referenced by the index.
  • Checks integrity for all content entries and removes invalid content.
  • Fixes cache ownership.
  • Removes the tmp directory in the cache and all its contents.

When it's done, it'll return an object with various stats about the verification process, including amount of storage reclaimed, number of valid entries, number of entries removed, etc.

Options
  • opts.uid - uid to assign to cache and its contents
  • opts.gid - gid to assign to cache and its contents
  • opts.filter - receives a formatted entry. Return false to remove it. Note: might be called more than once on the same entry.
Example
echo somegarbage >> $CACHEPATH/content/deadbeef
cacache.verify(cachePath).then(stats => {
  // deadbeef collected, because of invalid checksum.
  console.log('cache is much nicer now! stats:', stats)
})
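A sketch of using opts.filter during verification (the age-based policy is purely illustrative, and the entry shape is assumed to match get.info output):

cacache.verify(cachePath, {
  filter (entry) {
    // Hypothetical policy: keep entries added in the last 30 days, drop the rest.
    return Date.now() - entry.time < 30 * 24 * 60 * 60 * 1000
  }
}).then(stats => {
  console.log('verification stats:', stats)
})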

> cacache.verify.lastRun(cache) -> Promise

Returns a Date representing the last time cacache.verify was run on cache.

Example
cacache.verify(cachePath).then(() => {
  cacache.verify.lastRun(cachePath).then(lastTime => {
    console.log('cacache.verify was last called on ' + lastTime)
  })
})

cacache's People

Contributors

chrisdickinson, greenkeeper[bot], hdgarrood, iarna, jbcpollak, jfmartinez, larsgw, rmg, zkat


cacache's Issues

Default to sha512

not long after the default was changed to sha1, there was a fun little security event. So just default to sha512 from now on. lol.

Go back to simple hashes for index keys

index keys, in practice, tend to be pretty long (so they're hard to truncate without growing buckets massively), and it's not actually all that useful to have them be legible.

Switch back to a shortened sha1 of the keys, probably down to 4 or 5 chars. That should give relatively few conflicts without making the index blow up in size.
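A rough sketch of the idea (the key and prefix length are illustrative):

const crypto = require('crypto')

// Hash the (potentially long) index key and keep only a short prefix.
function hashKey (key) {
  return crypto
    .createHash('sha1')
    .update(key)
    .digest('hex')
    .slice(0, 5) // "4 or 5 chars" as suggested above
}

console.log(hashKey('make-fetch-happen:request-cache:https://registry.npmjs.org/cacache'))
// => a 5-character hex prefix usable as an index bucket name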

An in-range update of weallcontribute is breaking the build 🚨

Version 1.0.8 of weallcontribute just got published.

Branch Build failing 🚨
Dependency weallcontribute
Current Version 1.0.7
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

As weallcontribute is β€œonly” a devDependency of this project it might not break production or downstream projects, but β€œonly” your build or test tools – preventing new deploys or publishes.

I recommend you give this issue a high priority. I’m sure you can resolve this πŸ’ͺ


Status Details
  • ❌ continuous-integration/travis-ci/push The Travis CI build is in progress Details

  • ❌ coverage/coveralls Coverage pending from Coveralls.io Details

  • ❌ continuous-integration/appveyor/branch AppVeyor build failed Details

Commits

The new version differs by 2 commits.

  • 40138df 1.0.8
  • a3e5e4d docs(contributing): move toc above the fold, move intro below. add emoji.

See the full diff.

Not sure how things should work exactly?

There is a collection of frequently asked questions and of course you may always ask my humans.


Your Greenkeeper Bot 🌴

use lru-cache for memoization

https://www.npmjs.com/package/lru-cache is probably a better alternative to the current way the memoization code works, specially if cacache is gonna be used for more long-lasting processes, or if we want more control over memory use in general.

PRs for this should keep the memo API as-is, but accept opts to tweak how much we actually keep in there.

Another point is that it might be nice to get rid of the memoization option altogether and simply do it automatically through this lib. That might simplify a ton of stuff tbqh. :\
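A minimal sketch of what this could look like, assuming lru-cache's v4-style API and the documented fact that opts.memoize accepts any object with get/set methods (limits are illustrative):

const cacache = require('cacache')
const LRU = require('lru-cache')

const cachePath = '/tmp/my-toy-cache'

// Bounded memoization store instead of the unbounded global one.
const memo = LRU({
  max: 500,               // at most 500 memoized entries
  maxAge: 5 * 60 * 1000   // evict entries after 5 minutes
})

// lru-cache instances have get/set, so they can already be passed through opts.memoize.
cacache.put(cachePath, 'my-key', 'hello', { memoize: memo })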

Use conventional-changelog and generate some

I like https://github.com/conventional-changelog/standard-version

We already use the necessary commit format for this to work.

This would essentially replace our currently flow with npm version <M/m/p> -> new release and replace it with npm run release, which can then trigger all the other automated pushes.

If/when we do start using it, weallcontribute should get a PR and adjust accordingly. My favorite part of all this is that this would bring weallcontribute closer to a standard-style project component, but instead of being for code style, it would be for npm dev process and contributions. The more we automate, the more possibility for opinion we take away, the better. If it fits for people, it fits. If not, then they can fork their own. :)

An in-range update of ssri is breaking the build 🚨

Version 4.1.2 of ssri just got published.

Branch Build failing 🚨
Dependency ssri
Current Version 4.1.1
Type dependency

This version is covered by your current version range and after updating it in your project the build failed.

As ssri is a direct dependency of this project this is very likely breaking your project right now. If other packages depend on you it’s very likely also breaking them.
I recommend you give this issue a very high priority. I’m sure you can resolve this πŸ’ͺ


Status Details
  • ❌ continuous-integration/travis-ci/push The Travis CI build is in progress Details

  • ❌ continuous-integration/appveyor/branch AppVeyor build cancelled Details

Commits

The new version differs by 2 commits.

  • e0f1a5d chore(release): 4.1.2
  • b1c4805 fix(stream): _flush can be called multiple times. use on("end")

See the full diff.

Not sure how things should work exactly?

There is a collection of frequently asked questions and of course you may always ask my humans.


Your Greenkeeper Bot 🌴

write user guide

The guide in the readme is currently empty. Probably would be good to fill it out. ;)

An in-range update of tap is breaking the build 🚨

Version 10.1.2 of tap just got published.

Branch Build failing 🚨
Dependency tap
Current Version 10.1.1
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

As tap is β€œonly” a devDependency of this project it might not break production or downstream projects, but β€œonly” your build or test tools – preventing new deploys or publishes.

I recommend you give this issue a high priority. I’m sure you can resolve this πŸ’ͺ


Status Details
  • βœ… continuous-integration/travis-ci/push The Travis CI build passed Details

  • βœ… coverage/coveralls First build on greenkeeper/tap-10.1.2 at 94.215% Details

  • ❌ continuous-integration/appveyor/branch AppVeyor build failed Details

Commits

The new version differs by 3 commits.

  • bf6f51e v10.1.2
  • bdb1d80 Inherit bailout results from parent test
  • 0aa4202 Support old nodes that did not respect process.exitCode

See the full diff.

Not sure how things should work exactly?

There is a collection of frequently asked questions and of course you may always ask my humans.


Your Greenkeeper Bot 🌴

Should content files be read-only?

A quick skim through my ~/.npm/_cacache shows that the verified content files are all -rw-r--r--.

It looks like the contents all get written to files in a temp space and then get "moved" to the content location. Should the final destination be made completely read-only (-r--r--r--) since there's no reason to ever modify their contents? The ability to delete the files is based on the w bit of the containing folder, so it shouldn't cause problems there.
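A sketch of the suggestion, assuming the chmod happens right after the tmp file is moved into its final location (the content path shown is illustrative):

const fs = require('fs')

// Drop the write bits so the final content file is -r--r--r--.
fs.chmod('/tmp/my-toy-cache/content/deadbeef', 0o444, err => {
  if (err) throw err
  console.log('content file is now read-only')
})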

Fix dangling file handle issues with put-stream

Whenever there's an error in put-stream, the tmp file handle stays open. Usually, this is a "who cares" thing, except:

  1. we're going to possibly put pressure on open fd limits depending on the system?
  2. Windows can't even

Let's fix this. For the Windows. ❀️

Related to #34, which is what actually exposed this bug, and #35 which should hopefully fix this bug.

collate content files into different dirs

If you look in your .git/objects directory, you'll notice that it's a bunch of 2-character directories, inside of which the actual object files are stored. This is important because many filesystems start seeing significant performance issues when there are too many files in a single directory.

cacache should do the same thing, though it would technically be a breaking change (since you can't reuse previous caches, and I'm trying to be pedantic).

This should be done for both the index and the content directory, and the biggest load is probably adapting all the tests to read the right files.
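A sketch of the git-style layout being described (illustrative only, not necessarily the exact layout cacache ends up using):

const path = require('path')

// Use the first two characters of the content hash as a subdirectory name.
function bucketedContentPath (cachePath, digest) {
  return path.join(cachePath, 'content', digest.slice(0, 2), digest.slice(2))
}

console.log(bucketedContentPath('/tmp/my-toy-cache', 'deadbeef'))
// => /tmp/my-toy-cache/content/de/adbeef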

NOTE

While I consider this a starter issue, it does involve touching a lot of the code and rewriting parts of the test suite that mock the contents directory. The diff is likely to be big. This is fine. If you wanna pick this up and need some help, go ahead and @ me :) Otherwise, I'll do it myself later.

Support storing content as gzipped tarballs

Give users the option, when they store something, to reduce storage use and potentially speed up cache writes.

I believe if the content directories themselves are written as files, we can just do an fs.stat to see if it's a directory or file, and assume that files are compressed tarballs.

There should also be an option to compress/decompress individual entries (which can be used elsewhere to compress/decompress the entire cache, as desired).

An in-range update of through2 is breaking the build 🚨

Version 2.0.2 of through2 just got published.

Branch Build failing 🚨
Dependency through2
Current Version 2.0.1
Type dependency

This version is covered by your current version range and after updating it in your project the build failed.

As through2 is a direct dependency of this project this is very likely breaking your project right now. If other packages depend on you it’s very likely also breaking them.
I recommend you give this issue a very high priority. I’m sure you can resolve this πŸ’ͺ


Status Details
  • ❌ continuous-integration/appveyor/branch Waiting for AppVeyor build to complete Details

  • ❌ continuous-integration/travis-ci/push The Travis CI build failed Details

Commits

The new version differs by 4 commits.

See the full diff.

Not sure how things should work exactly?

There is a collection of frequently asked questions and of course you may always ask my humans.


Your Greenkeeper Bot 🌴

hasContent should return the file size

Since #49 landed, index entries store content size data. This data is critically important when it comes to quickly making decisions about whether to stream or do bulk operations on content.

The missing piece now is that content-addressed interactions don't have a way to figure out the size of the data they're about to process.

So, hasContent, which is usually the entry point to this, should return size information for the file if it happens to find it, along with the SRI. The change required for this is mostly pretty small: lib/content/read.js is the one with hasContent, and that function already calls out to fs.lstat. All that's left is to thread the data through (probably wrap the sri return value so you get {sri, stat}. hasContent itself, when it succeeds, should return an object that looks like {sri, size}. Otherwise, false.

Since I expect it to be a relatively small patch, I've marked this as a good starter issue. Also one that will have some seriously awesome performance impact in the libraries using it! Very soon, and very tangibly! make-fetch-happen and pacote already have places to drop this in for speeeeedz.
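A rough sketch of the return shape described above (paths and SRI handling are simplified placeholders, not cacache's actual internals):

const fs = require('fs')

function hasContent (contentPath, sri) {
  return new Promise(resolve => {
    fs.lstat(contentPath, (err, stat) => {
      if (err) { return resolve(false) }
      resolve({ sri, size: stat.size })
    })
  })
}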

An in-range update of lru-cache is breaking the build 🚨

Version 4.1.0 of lru-cache just got published.

Branch Build failing 🚨
Dependency lru-cache
Current Version 4.0.2
Type dependency

This version is covered by your current version range and after updating it in your project the build failed.

As lru-cache is a direct dependency of this project this is very likely breaking your project right now. If other packages depend on you it’s very likely also breaking them.
I recommend you give this issue a very high priority. I’m sure you can resolve this πŸ’ͺ

Status Details
  • ❌ continuous-integration/appveyor/branch Waiting for AppVeyor build to complete Details
  • ❌ continuous-integration/travis-ci/push The Travis CI build is in progress Details
  • ❌ coverage/coveralls First build on greenkeeper/lru-cache-4.1.0 at 89.883% Details

Commits

The new version differs by 9 commits.

See the full diff

Not sure how things should work exactly?

There is a collection of frequently asked questions and of course you may always ask my humans.


Your Greenkeeper Bot 🌴

Object interface

The toplevel cacache should be a class that can be instantiated with its own internal state. All current functions should then become static functions of this class, and the class should wrap them into methods that simply move the cache path into an object-local property.

This will make cacache dependency-injectable, and possibly make it easier to manage opts for users. Once #75 gets merged, it also means LRU settings can be managed per-instance, rather than relying on a single module's internal state shared across all users.

The new object interface will not be a breaking change: all current functionality must continue working as-is. Additionally, static functions should accept Cacache instances in place of a plain path, opts should be defaulted off that instance, and the cachePath automatically extracted. Everything else should work the same.

Memoization - Expected Cache Entry to Drop Past the Max Age

Hello!

I decided to give the in-memory cache a try. It does keep the entries in memory, but since it uses the lru-cache module and sets maxAge to 3 minutes, I expected the content to "fall out". Here are the steps (sample code is below):

  1. Added an entry to the cache
  2. Deleted the cache directory
  3. Waited 5 minutes
  4. Expected the entry not to be found, since the LRU cache drops entries after maxAge.

const cacache = require('cacache');
const rimraf = require('rimraf');

const key = 'someKey';
const cachePath = './cache';

const opts = {
  memoize: true,
};
cacache.put(cachePath, key, 'hello world!', opts)
.then((integrity) => {
  console.log(integrity);
  rimraf(cachePath, () => {});
});

// Wait five minutes
setTimeout(() => {
  cacache.get(cachePath, key, opts)
    .then(console.log);
}, 5 * 60 * 1000);

Details:

  • Node v4.7.2
  • npm v2.15
  • cacache v7.1.0

lockfile crash on high concurrency

So it turns out the current usage of lockfile implodes when you have a bunch of things trying to grab a lockfile at the same time (even in a relatively short amount of time). It starts spitting out a bunch of EEXIST errors like:

{ Error: EEXIST: file already exists, open '/path/to.lock'
  errno: -17,
  code: 'EEXIST',
  syscall: 'open',
  path: '/path/to.lock' }

Passing config values such as https://github.com/npm/npm/blob/latest/lib/utils/locker.js#L27-L29 (or at least some reasonable defaults) made these errors completely go away when I just tossed the current npm defaults in.

Tests for this one should be fun 😬 . Any tests that are written for this must first fail both locally and on CI (appveyor and travis alike).

An in-range update of rimraf is breaking the build 🚨

Version 2.6.1 of rimraf just got published.

Branch Build failing 🚨
Dependency rimraf
Current Version 2.6.0
Type dependency

This version is covered by your current version range and after updating it in your project the build failed.

As rimraf is a direct dependency of this project this is very likely breaking your project right now. If other packages depend on you it’s very likely also breaking them.
I recommend you give this issue a very high priority. I’m sure you can resolve this πŸ’ͺ


Status Details
  • βœ… continuous-integration/travis-ci/push The Travis CI build passed Details

  • βœ… coverage/coveralls First build on greenkeeper/rimraf-2.6.1 at 93.269% Details

  • ❌ continuous-integration/appveyor/branch AppVeyor build failed Details

Commits

The new version differs by 2 commits.

  • d84fe2c v2.6.1
  • e8cd685 only run rmdirSync 'retries' times when it throws

See the full diff.

Not sure how things should work exactly?

There is a collection of frequently asked questions and of course you may always ask my humans.


Your Greenkeeper Bot 🌴

add coveralls + badge

cacache already has coverage output from nyc. Just need to hook up travis to actually report the coveralls stuff, and add a badge to the README to show it.

Make testDir.reset async

Over in pacote, zkat/pacote#13 made testDir.reset() operate asynchronously, which should work around random Windows-related failures. Port that over here, since it should play much nicer with the new tap.

write garbage collector

The GC should do a few things:

  • delete content entries that aren't referenced at all by the index
  • clean up extra, shadowed entries in index buckets (note: be mindful that deletes involve just an addition rn)
  • remove invalid entries from index buckets
  • remove any content entries that fail their checksum
  • write docs
  • Update code to be compatible with new index and content directory formats
  • Don't crash on unfixable perm problems, but notify about them (users should probably re-run the verification with elevated perms and pass in a uid/gid)
  • write tests

Stretch goals

Please create enhancement issues for each of these before closing this issue, unless they get implemented in the same PR.

  • safely acquire locks when manipulating things
  • include interface for users with more domain knowledge of the cache keys to do custom marking (for example, if both registry:[email protected] and registry:[email protected] are present, a custom marker might decide to only mark the latter and bet that the former will rarely be installed)
  • delete entries older than <date> altogether. This might just be something to be done through the user-provided marker.

Use bulk operations for bulk reads

Writes can probably wait, but tbh if someone is asking for a bulk read, just use fs.readFile, run the necessary checksum, and we're done. Doing the stream dance seems really pointless here.
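A sketch of that bulk-read path, assuming ssri's checkData is used for the verification step (names are illustrative):

const fs = require('fs')
const ssri = require('ssri')

function bulkRead (contentPath, integrity) {
  return new Promise((resolve, reject) => {
    fs.readFile(contentPath, (err, data) => {
      if (err) { return reject(err) }
      // Verify the whole buffer in one shot instead of piping through a checksum stream.
      if (!ssri.checkData(data, integrity)) {
        return reject(new Error('EINTEGRITY: content checksum failed'))
      }
      resolve(data)
    })
  })
}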

An in-range update of standard is breaking the build 🚨

Version 8.6.0 of standard just got published.

Branch Build failing 🚨
Dependency standard
Current Version 8.5.0
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

As standard is β€œonly” a devDependency of this project it might not break production or downstream projects, but β€œonly” your build or test tools – preventing new deploys or publishes.

I recommend you give this issue a high priority. I’m sure you can resolve this πŸ’ͺ


Status Details
  • ❌ continuous-integration/appveyor/branch Waiting for AppVeyor build to complete Details

  • ❌ continuous-integration/travis-ci/push The Travis CI build failed Details

Commits

The new version differs by 10 commits.

See the full diff.

Not sure how things should work exactly?

There is a collection of frequently asked questions and of course you may always ask my humans.


Your Greenkeeper Bot 🌴

Correct spelling mistake in CONTRIBUTING.md

In CONTRIBUTING.md the second paragraph reads

Please make sure to read the relevant section before making your contribution! It will make it a lot easier for us maintainers to make the most of it and smooth out the experience fo all involved. πŸ’š

"fo all involved" should be changed to "of all involved"!

Check out the section on Contributing Documentation to discover how to make this contribution!

🌞

Auto-delete corrupted content entries

If we get an EINTEGRITY error while reading stuff from the cache, the cached data should be unlinked. There's no need to remove associated index entries, though.

This should be implementable entirely within lib/content/read.js, since that's the single interface for ever reading data out of the cache. Things should be removable before any errors are actually emitted through either the promise or stream interfaces.

That said, this "feature" might be a bad idea -- in exchange for automatically clearing out bad caches, we might be giving up the ability to debug integrity issues. I think this is ok, though, because cacache errs on the side of throwing things away, and without this change, users are left with the responsibility of figuring out and fixing cache issues. They really shouldn't have to.

cache versioning

The on-disk cache format should be versioned, so cacache is able to upgrade on-disk data format in the future. Not much is needed right now besides a good way to just put a version number in the filesystem.

Use bulk operations for bulk content write

If a user is doing a bulk write, just do a single fs.writeFile with a few other necessary backflips. The biggest downside is this might duplicate code, and we should get benchmarks for this, but it might be much much much faster than the huge promise/stream frankenstein.

An in-range update of weallbehave is breaking the build 🚨

Version 1.2.0 of weallbehave just got published.

Branch Build failing 🚨
Dependency weallbehave
Current Version 1.0.3
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

As weallbehave is β€œonly” a devDependency of this project it might not break production or downstream projects, but β€œonly” your build or test tools – preventing new deploys or publishes.

I recommend you give this issue a high priority. I’m sure you can resolve this πŸ’ͺ

Status Details
  • ❌ continuous-integration/appveyor/branch Waiting for AppVeyor build to complete Details
  • βœ… continuous-integration/travis-ci/push The Travis CI build passed Details
  • ❌ coverage/coveralls First build on greenkeeper/weallbehave-1.2.0 at 89.926% Details

Commits

The new version differs by 4 commits.

See the full diff

Not sure how things should work exactly?

There is a collection of frequently asked questions and of course you may always ask my humans.


Your Greenkeeper Bot 🌴

An in-range update of ssri is breaking the build 🚨

Version 4.1.5 of ssri just got published.

Branch Build failing 🚨
Dependency ssri
Current Version 4.1.4
Type dependency

This version is covered by your current version range and after updating it in your project the build failed.

As ssri is a direct dependency of this project this is very likely breaking your project right now. If other packages depend on you it’s very likely also breaking them.
I recommend you give this issue a very high priority. I’m sure you can resolve this πŸ’ͺ

Status Details
  • ❌ continuous-integration/appveyor/branch Waiting for AppVeyor build to complete Details
  • βœ… continuous-integration/travis-ci/push The Travis CI build passed Details
  • ❌ coverage/coveralls First build on greenkeeper/ssri-4.1.5 at 89.883% Details

Commits

The new version differs by 2 commits.

  • 75be125 chore(release): 4.1.5
  • fb1293e fix(integrityStream): stop crashing if opts.algorithms and opts.integrity have an algo mismatch

See the full diff

Not sure how things should work exactly?

There is a collection of frequently asked questions and of course you may always ask my humans.


Your Greenkeeper Bot 🌴

check fd race issues

On Windows, behavior gets very strange when, say, an antivirus decides to open a file after it's been freshly written. Only one process can have an fd open on win32, so it's important to double-check the design (and test!) to make sure that isn't a problem with cacache proper. AppVeyor should be able to CI that well enough.

Suggestion: ignore junk files e.g. .DS_Store while scanning cache directories

Environment

$ sw_vers
ProductName:	Mac OS X
ProductVersion:	10.12.4
BuildVersion:	16E195

$ node -v
v7.10.0

$ npm ls cacache
...
└── [email protected] 

Problem

I'm trying to use cacache in my module. To verify it works correctly, I often open the cache directory with macOS Finder.app, and it creates a .DS_Store file in every directory I've opened.

Those .DS_Store files cause ENOTDIR errors while recursively calling readdir on the cache directory:

ENOTDIR: not a directory, scandir '/Users/shinnn/.npm/_cacache/index-v5/0b/.DS_Store'

Solution

I think we have 3 options to solve this problem:

  • i. Test the file path with the junk module before calling readdir on it (see the sketch at the end of this issue).
  • ii. fs.stat it and check whether it's actually a directory before calling readdir on it.
    • This solution would lead to a considerable performance regression, though.
  • iii. No need to solve this problem. Don't directly open the cache directories with file explorer apps.

I prefer the first one, and can create a pull request if you prefer it too.

Thanks.
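A minimal sketch of option (i), assuming the junk module's not() predicate is used to skip files like .DS_Store before recursing:

const fs = require('fs')
const junk = require('junk')

fs.readdir('/Users/shinnn/.npm/_cacache/index-v5', (err, files) => {
  if (err) throw err
  // Only real bucket directories survive the filter.
  files.filter(junk.not).forEach(bucket => {
    console.log('would scan', bucket)
  })
})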

Add contribution documentation

zkat/pacote#29 adds LICENSE, CONTRIBUTING.md, and CODE_OF_CONDUCT.md to that project -- cacache should have similar docs added. That is most likely a matter of scanning through the docs and making sure any references to pacote are replaced with cacache. Eyeballing things to make sure nothing snuck through is also πŸ’―.

support node-tar

tar-fs is significantly faster than node-tar in most cases, and avoids the weirdness of fstream that has caused much trouble in the past.

That said, node-tar has been the standard packing/unpacking library for npm and includes a lot of work to make damn sure those tarballs are widely compatible. It might not be worth the risk.

In any case, it would be nice to abstract out where tar-fs gets used so other libraries can be swapped out in the future.

write tests: entry-index

get.info has some tests attached to it, but that only covers index.find. Need to have tests for more stuff, especially for index.insert, which is a pretty critical part.

  • index.find
  • index.insert
  • index.ls

Protect against hash conflicts

I'm starting to think that cacache should start storing secondary integrity hashes in the index. Direct content address reads can still potentially yield bad data, but if you provide a key, cacache will interpret checksum conflicts as regular checksum failures (by using the stronger algorithm for data verification), and then it's up to the user to figure out what to do with it.

In the case of, say, pacote, what would happen on a tarball conflict is simply treating the conflict as corruption and then it would re-fetch the data.

idk if this is worth the effort -- if you're using cacache with weak checksums (it defaults to sha512!), then you're basically asking for trouble, but the reality is the npm registry still relies on sha1, and alternative registries will continue to do so further into the future.

fix cacache.rm

index.insert doesn't support null as a digest, and it basically requires a hashAlgorithm now. Maybe the hash should be an extra parameter there from now on, sigh.

An in-range update of ssri is breaking the build 🚨

Version 4.1.3 of ssri just got published.

Branch Build failing 🚨
Dependency ssri
Current Version 4.1.2
Type dependency

This version is covered by your current version range and after updating it in your project the build failed.

As ssri is a direct dependency of this project this is very likely breaking your project right now. If other packages depend on you it’s very likely also breaking them.
I recommend you give this issue a very high priority. I’m sure you can resolve this πŸ’ͺ

Status Details
  • ❌ continuous-integration/appveyor/branch Waiting for AppVeyor build to complete Details
  • βœ… continuous-integration/travis-ci/push The Travis CI build passed Details
  • ❌ coverage/coveralls First build on greenkeeper/ssri-4.1.3 at 89.838% Details

Commits

The new version differs by 6 commits.

  • 76f4a69 chore(release): 4.1.3
  • 9d43a67 docs(coc): updated CODE_OF_CONDUCT.md
  • f625494 deps: update devDeps
  • cc646c3 meta: switch to package-lock.json
  • c2c262b fix(check): handle various bad hash corner cases better
  • d5b0459 meta: added shrinkwrap

See the full diff

Not sure how things should work exactly?

There is a collection of frequently asked questions and of course you may always ask my humans.


Your Greenkeeper Bot 🌴

write tests: basic external API tests

The bulk of the tests are unit tests of the internal bits that do most of the heavy lifting.

We also need tests that do some basic tests of the external versions of the API functions to make sure everything's being called well. Don't need to go into much detail with these because all that's covered by unit tests.

An in-range update of safe-buffer is breaking the build 🚨

Version 5.1.0 of safe-buffer just got published.

Branch Build failing 🚨
Dependency safe-buffer
Current Version 5.0.1
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

As safe-buffer is β€œonly” a devDependency of this project it might not break production or downstream projects, but β€œonly” your build or test tools – preventing new deploys or publishes.

I recommend you give this issue a high priority. I’m sure you can resolve this πŸ’ͺ

Status Details
  • ❌ continuous-integration/appveyor/branch Waiting for AppVeyor build to complete Details
  • βœ… continuous-integration/travis-ci/push The Travis CI build passed Details
  • ❌ coverage/coveralls First build on greenkeeper/safe-buffer-5.1.0 at 89.883% Details

Commits

The new version differs by 10 commits.

See the full diff

Not sure how things should work exactly?

There is a collection of frequently asked questions and of course you may always ask my humans.


Your Greenkeeper Bot 🌴
