
Cache size · node-lru-cache · CLOSED

devDalys commented on June 18, 2024
Cache size


Comments (1)

isaacs commented on June 18, 2024

Not a stupid question at all, probably the most important question you could be asking when considering using a cache!

Unfortunately, it is one that I probably can't answer for you, as I've never heard of Strapi and don't know what your app is doing, or what environments it exists in.

lru-cache gives you a lot of flexibility in how you use it, but that's a double-edged sword, because you need to tune the dials yourself. Some questions to maybe help get you thinking in the right direction about how to approach it:

  • When your app runs, how many "things" typically get loaded frequently at any given time? Sometimes there's a "sweet spot", like, there are 10m records that could be fetched, but in any given day, 1000 of them are fetched 1m times each, and from there, the popularity drops off quickly, with the 2000th most popular one being less than 1% of traffic. The goal is to find the inflection point where you maximize the ratio of cache hits to cache size. (Smallest size to still get a considerable number of cache hits.)
  • How "bursty" is the traffic to the data you're caching? Maybe the most popular items vary hour to hour, but within a given hour, you get a ton of requests for the same 100 things. In general, the more bursty the traffic, the better an LRU cache can help, and you may be able to get away with an even smaller cache size.
  • How much memory does each "thing" consume, and how much memory do you have available? If the things are big, and you are low on memory, cache fewer items. If memory is cheap, and/or the items are small, cache more items.
  • How important is fresh data? This isn't so much about cache size, but it can inform whether you go with allowStale, allowStaleOnFetchRejection, allowStaleOnFetchAbort, etc. (see the sketch after this list).
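
To make those dials concrete, here's a minimal sketch of an lru-cache configuration that touches each of the questions above. Every number in it is a placeholder assumption to be tuned, and loadRecordFromDb is a hypothetical loader, not part of lru-cache:

```js
import { LRUCache } from 'lru-cache'

const cache = new LRUCache({
  // Entry-count cap: aim for the inflection point where adding more
  // entries stops meaningfully improving the hit rate.
  max: 1000,

  // Optional memory budget: cap total size at ~50 MiB, with a
  // per-entry size estimate (here assuming string values).
  maxSize: 50 * 1024 * 1024,
  sizeCalculation: (value) => value.length,

  // Freshness: entries go stale after 5 minutes, but get() may still
  // return a stale value rather than nothing.
  ttl: 1000 * 60 * 5,
  allowStale: true,

  // If you populate the cache via fetch(), you can also fall back to
  // stale data when a refresh fails or is aborted.
  allowStaleOnFetchRejection: true,
  allowStaleOnFetchAbort: true,
  fetchMethod: async (key) => loadRecordFromDb(key), // hypothetical loader
})
```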

There's no magic formula, unfortunately. Sometimes the best thing to do is just add a caching layer, and then watch what happens. Does your 99th percentile performance improve? Does your memory utilization stay reasonable? If you tune the size or the TTL, does that make things better/worse/unchanged? The sweet spot might vary, or even be pretty unexpected. Observability tools can make a world of difference.
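
Continuing the sketch above, one crude way to "watch what happens", if you don't already have observability tooling, is to count hits and misses around the cache yourself. This wrapper is a hand-rolled illustration, not part of lru-cache's API:

```js
// Hand-rolled hit/miss counters, independent of lru-cache itself.
let hits = 0
let misses = 0

function instrumentedGet(key) {
  const value = cache.get(key) // `cache` from the sketch above
  if (value === undefined) misses++
  else hits++
  return value
}

// Log the hit rate every minute so you can see what resizing the
// cache or changing the TTL actually does to it.
setInterval(() => {
  const total = hits + misses
  if (total > 0) {
    const rate = ((hits / total) * 100).toFixed(1)
    console.log(`cache hit rate: ${rate}% over ${total} lookups`)
  }
  hits = 0
  misses = 0
}, 60_000)
```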

Hope that helps!

