
catbox-memory's Introduction

@hapi/catbox-memory

Memory adaptor for catbox.

catbox-memory is part of the hapi ecosystem and was designed to work seamlessly with the hapi web framework and its other components (but works great on its own or with other frameworks). If you are using a different web framework and find this module useful, check out hapi – they work even better together.

Visit the hapi.dev Developer Portal for tutorials, documentation, and support


catbox-memory's People

Contributors

arb, avitale, cjihrig, devinivy, geek, gxapplications, hpyzik, hueniverse, jagoda, jarrodyellets, kpdecker, lloydbenson, marsup, nargonath, nwhitmont, paulovieira, zenlor


catbox-memory's Issues

Seems not to be cluster-proof

When using this cache in cluster mode under PM2 (for hapi-rate-limit usage, in my case), I clearly see that the cache is not shared between my 4 instances.
I suppose that because it is pure in-memory storage, it is limited to the process. That seems logical, but it is not stated anywhere...

Am I right, or is there a problem here? Is there some specific configuration?
If this cache cannot be shared across instances in cluster mode, I suggest stating that in the module's main description. I suppose some other catbox adapters are cross-instance.
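
For what it's worth, a hedged sketch of the usual workaround (assuming @hapi/hapi v18+ provider syntax and a recent @hapi/catbox-redis; the Engine export name varies across versions): a network-backed adapter gives all PM2 workers one shared cache, while catbox-memory lives inside each worker's own heap.

const Hapi = require('@hapi/hapi');
const { Engine: CatboxRedis } = require('@hapi/catbox-redis');

// Each PM2 worker runs this in its own process. With catbox-memory the cache
// lives in that process's heap, so the 4 instances never see each other's
// entries. Pointing every worker at one Redis makes the cache shared.
const server = Hapi.server({
    port: 7000,
    cache: [{
        name: 'shared',
        provider: {
            constructor: CatboxRedis,
            options: { host: '127.0.0.1', port: 6379 }  // one Redis for all workers
        }
    }]
});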

Buffers can change while in cache

With allowMixedContent = true, a stored Buffer can be modified by writing to the Buffer item returned from a get() call.

I suspect another copy() is required on get() to handle this, since there are no read-only buffers.
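
A short repro sketch of the reported behavior (assuming a @hapi/catbox-memory version that supports the allowMixedContent option; on v5+ the engine is exported as CatboxMemory.Engine):

const Catbox = require('@hapi/catbox');
const CatboxMemory = require('@hapi/catbox-memory');

const demo = async () => {

    const client = new Catbox.Client(CatboxMemory, { allowMixedContent: true });
    await client.start();

    const key = { segment: 'buffers', id: 'demo' };
    await client.set(key, Buffer.from('original'), 60000);

    const { item } = await client.get(key);
    item.write('MUTATED!');                                   // writes into the returned Buffer

    // Prints 'MUTATED!' if get() hands back the cached Buffer without a copy.
    console.log((await client.get(key)).item.toString());
};

demo();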

why peer depend instead of verify in parent

What is the motivation for having npm verify the compatibility between catbox-memory and catbox?

catbox itself could throw if a store interface that is not compatible with catbox 2.x is installed.

What benefit do peer dependencies bring for this use case?
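
For context, the declaration under discussion looks roughly like this in catbox-memory's package.json (the version range is illustrative):

{
  "name": "catbox-memory",
  "peerDependencies": {
    "catbox": "2.x.x"
  }
}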

Performance compared to catbox-redis

Hi,

I have a production app that uses catbox caching quite extensively. 95% of my values are stored in Redis, since I am using multiple containers and I want them all to use the same cache. However, I have some global values that are used so often that I thought I could cache them on each container individually to improve read speed for those values.

During my load test it turned out that catbox-redis (with latency on AWS) is faster than catbox-memory, which I found confusing.

Has anyone experienced something similar, or does anyone have an idea why that is the case?
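
One hedged hypothesis, echoing the serialize/deserialize issue further down: catbox-memory stores a stringified copy and parses it on every get(), so for large values that CPU cost can exceed a fast network round-trip. A rough way to measure the suspected overhead:

// Approximates catbox-memory's per-set/per-get copy cost for a large value.
const big = { data: 'x'.repeat(5 * 1024 * 1024) };   // ~5MB payload

console.time('stringify + parse');
JSON.parse(JSON.stringify(big));
console.timeEnd('stringify + parse');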

Memory limit does not restrict data size properly

Currently running a catbox cache using the following config:

        "pack": {
                "cache": [
                        {
                                "engine": "catbox-memory",
                                "shared": true
                        }
                ]
        },

This should default to a 100MB cache limit, but I am currently seeing instances of this cache in the 389MB range, as reported by heapdump while under load. I am trying to get more information about the state, but mdb is not cooperating right now.
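
For reference, the limit can also be set explicitly via the engine's maxByteSize option, shown below in the same pack-style config (104857600 bytes is the documented 100MB default). Note that, as I understand the source, the engine enforces an estimate based on the stringified value sizes, which need not match heapdump's numbers:

        "pack": {
                "cache": [
                        {
                                "engine": "catbox-memory",
                                "shared": true,
                                "maxByteSize": 104857600
                        }
                ]
        },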

Ability to store cache item forever

It would be nice if this engine supported passing a ttl value of 0 to store an item in the cache forever, which would also make it more compatible with other engines.
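
A sketch of the requested usage; client is assumed to be a started catbox Client, and the key and value are made up:

const storeForever = async (client) => {

    // Proposed semantics (not the behavior at the time of this issue):
    // a ttl of 0 keeps the item until it is explicitly dropped.
    await client.set({ segment: 'config', id: 'site' }, { offline: false }, 0);

    // The item should still be retrievable after any amount of time.
    return client.get({ segment: 'config', id: 'site' });
};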

Error: Cannot find module 'big-time'

For some reason, my docker build started failing today with this error:

Error: Cannot find module 'big-time'
at Function.Module._resolveFilename (module.js:538:15)
at Function.Module._load (module.js:468:25)
at Module.require (module.js:587:17)
at require (internal/module.js:11:18)
at Object.<anonymous> (/usr/src/app/node_modules/hapi/node_modules/catbox-memory/lib/index.js:5:17)
at Module._compile (module.js:643:30)
at Object.Module._extensions..js (module.js:654:10)
at Module.load (module.js:556:32)
at tryModuleLoad (module.js:499:12)
at Function.Module._load (module.js:491:3)

I will fill in more info once I have it.

Error Handling is not done properly

Runtime

nodejs

Runtime version

16.15

Module version

4.1.1

Last module version without issue

No response

Used with

hapi server cache.

Any other relevant information

Recently we have noticed that this error is not getting handled properly (screenshot attached).

What are you trying to achieve or the steps to reproduce?

Proper error handling, so that processing can continue smoothly afterwards.

What was the result you got?

Cache size limit reached, stack:
Error: Cache size limit reached
    at module.exports.internals.Connection.set (/opt/bba/server/node_modules/@hapi/catbox-memory/lib/index.js:184:19)
    at module.exports.set (/opt/bba/server/node_modules/@hapi/catbox/lib/client.js:91:31)
    at module.exports.internals.Policy.set (/opt/bba/server/node_modules/@hapi/catbox/lib/policy.js:298:31)
    at internals.onPreResponse (/opt/bba/server/node_modules/@hapi/yar/lib/index.js:307:17)
    at async module.exports.internals.Manager.execute (/opt/bba/server/node_modules/@hapi/hapi/lib/toolkit.js:45:28)
    at async Request._invoke (/opt/bba/server/node_modules/@hapi/hapi/lib/request.js:339:30)
    at async Request._postCycle (/opt/bba/server/node_modules/@hapi/hapi/lib/request.js:402:32)
    at async Request._reply (/opt/bba/server/node_modules/@hapi/hapi/lib/request.js:381:9)

What result did you expect?

It should provide better error handling
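
As a hedged user-side mitigation (the helper name is hypothetical, not part of the module), set() can be wrapped so a full cache degrades to serving uncached rather than failing the request:

// Hypothetical helper: treat "Cache size limit reached" as a soft failure.
const setSafely = async (cache, key, value, ttl) => {

    try {
        await cache.set(key, value, ttl);
    }
    catch (err) {
        if (err.message === 'Cache size limit reached') {
            console.warn('Cache full; value not stored for key:', key);
            return;                         // continue without caching
        }

        throw err;                          // unrelated errors still propagate
    }
};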

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on Greenkeeper branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please click the 'fix repo' button on account.greenkeeper.io.

Serialize/deserialize overhead

Hi there.

With one of our recent projects we have a requirement to cache a single object in the web server.
This object is refreshed from the database at regular intervals. Also, the object is read-only. Not in the true sense of the word (with Object.freeze, getters etc.) but in the sense that our application doesn't need to change it. Finally, refreshing the cached object has the potential to return no results, in which case the requirements are to bring the application offline by returning a 500 - Offline type page.

We saw this as a good opportunity to utilise catbox to save ourselves the hassle of cache management. catbox-memory looked perfect for our needs.

All went well until we looked into performance testing and found that the stringify/parse overhead was having quite an impact. Mainly due to the size but also due to the number of times we access it (on every prehandler).

We soon realised what was going on: to protect against mutating objects in the cache, catbox-memory saves a serialized copy of the object and deserializes it on the way out. We also realised that this is a good thing, in that it works consistently with other adapters that are not in-memory (e.g. redis).

The problem is that we don't have those requirements and we just want to store the object in the cache as-is, aware of the consequences that this may bring in terms of state/mutations. We don't want to write our own cache management or use another package as we'd rather stay in the hapi ecosystem.

We are toying with the idea of writing our own catbox adapter ("catbox-object" maybe?) that is virtually the same as this but without the serialization and with a big WARNING about state/mutating objects in cache.
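
For illustration, a minimal sketch of what such an adapter could look like, following the async catbox engine interface (all names hypothetical, with the state/mutation caveat applying in full):

'use strict';

// Hypothetical "catbox-object" engine: stores values by reference, skipping
// the stringify/parse round-trip entirely. WARNING: cached items are shared,
// mutable objects; anything that mutates a returned item mutates the cache.

class ObjectEngine {

    constructor() {
        this.cache = null;
    }

    async start() {
        this.cache = new Map();             // segment name -> Map of id -> envelope
    }

    async stop() {
        this.cache = null;
    }

    isReady() {
        return this.cache !== null;
    }

    validateSegmentName(name) {
        return name ? null : new Error('Empty segment name');
    }

    async get(key) {
        const segment = this.cache.get(key.segment);
        const envelope = segment && segment.get(key.id);
        if (!envelope) {
            return null;
        }

        const age = Date.now() - envelope.stored;
        if (age >= envelope.ttl) {
            return null;                    // expired
        }

        // Returned by reference: no copy, no deserialization.
        return { item: envelope.item, stored: envelope.stored, ttl: envelope.ttl - age };
    }

    async set(key, value, ttl) {
        let segment = this.cache.get(key.segment);
        if (!segment) {
            segment = new Map();
            this.cache.set(key.segment, segment);
        }

        segment.set(key.id, { item: value, stored: Date.now(), ttl });
    }

    async drop(key) {
        const segment = this.cache.get(key.segment);
        if (segment) {
            segment.delete(key.id);
        }
    }
}

module.exports = ObjectEngine;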

I wanted to run this past you first to see if you had any suggestions, or to hear your thoughts.

Many thanks

Dave

hapi server exits immediately if using catbox-memory cache

catbox-memory now seems to make hapi crash completely silently.
versions: node 8.7.0, hapi 16.6.2, catbox 9.0.0, catbox-memory 3.0.0

Below is a very basic server which exits immediately with 'Process finished with exit code 0':

const Hapi = require('hapi');
const Hoek = require('hoek');
const Mem = require('catbox-memory');

// basic server
const server = new Hapi.Server({
    cache: {
        engine: Mem,
        name: 'test',
        partition: 'test-partition'
    }
});

server.connection({ port: 7000 });

server.start((err) => {

    Hoek.assert(!err, err);
    console.log('Server started at: ' + server.info.uri);
});

Commenting out the

    cache: {
        engine: Mem,
        name: 'test',
        partition: 'test-partition'
    }

results in the server starting fine as usual.

Request for better Readme

Could you write a better Readme that describes the installation of this library and gives a usage example, so we can clearly see how to use it?
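
For example, after npm install @hapi/catbox @hapi/catbox-memory, usage looks roughly like this (hedged: the Engine export and option names match recent releases and may differ for older versions):

const Catbox = require('@hapi/catbox');
const { Engine: CatboxMemory } = require('@hapi/catbox-memory');

const main = async () => {

    // Create and start a client backed by the in-memory engine.
    const client = new Catbox.Client(CatboxMemory, { maxByteSize: 100 * 1024 * 1024 });
    await client.start();

    // Keys are { segment, id }; ttl is in milliseconds.
    const key = { segment: 'examples', id: 'greeting' };
    await client.set(key, 'hello world', 5000);

    const cached = await client.get(key);
    console.log(cached.item);               // 'hello world'

    await client.stop();
};

main();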

Large overhead from expiry timers when highly populated

Commonly seeing 20-40k timer instances, due to a timer being created for each cache entry. Beyond the CPU overhead this might incur (I don't know the Node internals, other than that there is some sort of linked list going on), this also creates a lot of memory overhead.

In one particular dump I am seeing 30+ megs of retained data for the timers, domains, protects, etc. that are associated with each timer instance.

Under our use pattern these timers commonly trigger at very similar times, and it seems like with some smart bucketing and a prune setInterval or similar, much of this overhead could be dramatically reduced.
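
A hedged sketch of that suggested direction (hypothetical code, not the module's): a single coarse setInterval sweeps expired entries instead of scheduling one timer per entry.

class PrunedCache {

    constructor(pruneIntervalMs = 1000) {
        this.entries = new Map();                       // key -> { value, expiresAt }
        this.timer = setInterval(() => this.prune(), pruneIntervalMs);
        this.timer.unref();                             // don't keep the process alive
    }

    set(key, value, ttlMs) {
        this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
    }

    get(key) {
        const entry = this.entries.get(key);
        if (!entry || entry.expiresAt <= Date.now()) {  // lazily drop stale hits
            this.entries.delete(key);
            return null;
        }

        return entry.value;
    }

    prune() {
        const now = Date.now();
        for (const [key, entry] of this.entries) {
            if (entry.expiresAt <= now) {
                this.entries.delete(key);
            }
        }
    }

    stop() {
        clearInterval(this.timer);
    }
}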
