
arthurfranca / apicache-plus

Effortless API-caching middleware for Express/Node.

License: MIT License

JavaScript 100.00%
api cache express fast javascript json koa memory middleware node redis response rest restify


apicache-plus's Issues

Ways to disable compression

I see compression is enabled by default, but is there a way to disable it globally? So far I rely on adding 'cache-control': 'no-transform' to the headers, but is this clean? If it is clean enough, maybe we could document it somewhere, or add a toggle option.
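For reference, the header-based workaround described in this issue can be packaged as a tiny middleware. This is only a sketch of that approach, not an official toggle; the middleware name is made up:

```javascript
// Sketch of the workaround from this issue: a Connect/Express-style
// middleware (hypothetical name) that sets "Cache-Control: no-transform",
// the standard directive telling compression layers that honor it not to
// re-encode the response body.
function noTransform(req, res, next) {
  res.setHeader('Cache-Control', 'no-transform')
  next()
}
```

Mounted before the cache middleware (e.g. `app.use(noTransform)`), every response would then carry the directive.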

Can't use 'head' request method

use:

const onlyPass = (req, res) => {
  // Can't get head request
  return res.statusCode === 200 
};
server.use(apicache('1 hour', onlyPass));

server.head('/health', (req, res) => {
  return res.sendStatus(200);
});

error:

<--- Last few GCs --->

[904:0x10264a000]    35898 ms: Mark-sweep 1386.3 (1432.9) -> 1380.4 (1429.4) MB, 1114.5 / 0.0 ms  (average mu = 0.142, current mu = 0.061) allocation failure scavenge might not succeed
[904:0x10264a000]    35911 ms: Scavenge 1386.9 (1429.4) -> 1381.6 (1431.4) MB, 5.4 / 0.0 ms  (average mu = 0.142, current mu = 0.061) allocation failure 
[904:0x10264a000]    35920 ms: Scavenge 1388.2 (1431.4) -> 1382.8 (1432.9) MB, 4.8 / 0.0 ms  (average mu = 0.142, current mu = 0.061) allocation failure 


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x2a030425be3d]
Security context: 0x1a80e161e6e9 <JSObject>
    1: /* anonymous */ [0x1a80eb2f5971] [/Users/xiaoyu/Work/jyxb-website/pc-v2/node_modules/apicache-plus/src/apicache.js:~1277] [pc=0x2a030481aa63](this=0x1a801ca1ad81 <JSGlobal Object>,cached=0x1a80eb2026f1 <undefined>)
    2: StubFrame [pc: 0x2a03042419b1]
    3: StubFrame [pc: 0x2a030421f8ec]
    4: EntryFrame [pc: 0x2a0304204ba1]
    5: ExitFrame [pc: 0...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x10003c597 node::Abort() [/Users/xiaoyu/.nvm/versions/node/v10.15.3/bin/node]
 2: 0x10003c7a1 node::OnFatalError(char const*, char const*) [/Users/xiaoyu/.nvm/versions/node/v10.15.3/bin/node]
 3: 0x1001ad575 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/xiaoyu/.nvm/versions/node/v10.15.3/bin/node]
 4: 0x100579242 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/xiaoyu/.nvm/versions/node/v10.15.3/bin/node]
 5: 0x10057bd15 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/Users/xiaoyu/.nvm/versions/node/v10.15.3/bin/node]
 6: 0x100577bbf v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/xiaoyu/.nvm/versions/node/v10.15.3/bin/node]
 7: 0x100575d94 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/xiaoyu/.nvm/versions/node/v10.15.3/bin/node]
 8: 0x10058262c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/xiaoyu/.nvm/versions/node/v10.15.3/bin/node]
 9: 0x1005826af v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/xiaoyu/.nvm/versions/node/v10.15.3/bin/node]
10: 0x100551ff4 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/Users/xiaoyu/.nvm/versions/node/v10.15.3/bin/node]
11: 0x1007da044 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/Users/xiaoyu/.nvm/versions/node/v10.15.3/bin/node]
12: 0x2a030425be3d 
error Command failed with signal "SIGABRT".
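Independently of the crash above, a toggle that explicitly admits both GET and HEAD might look like the sketch below; whether it avoids the out-of-memory error depends on the underlying bug:

```javascript
// Hedged variant of the onlyPass toggle from the report above: cache only
// successful GET and HEAD responses. A plain function, so it can be tested
// without a running server.
const onlyPass = (req, res) =>
  ['GET', 'HEAD'].includes(req.method) && res.statusCode === 200

// Hypothetical wiring, same shape as in the report:
// server.use(apicache('1 hour', onlyPass))
```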

Caching happens only on the 2nd request to the endpoint

When requesting an endpoint, which in turn accesses an external API, the first request to the endpoint does not get cached, only the second one. The same happens after the cache entry expires. The first request after expiry does not get cached, only the second.


GET /character/95334664 [STARTED]
res statuscode:200
GET /character/95334664 [FINISHED] 172.248 ms
GET /character/95334664 [CLOSED] 173.735 ms

GET /character/95334664 [STARTED]
res statuscode:200
GET /character/95334664 [FINISHED] 137.996 ms
GET /character/95334664 [CLOSED] 138.306 ms

[apicache] adding cache entry for "get/character/95334664{}" @ 1 minute - 139ms

Cache duration only works for a few seconds

const key = 'stream' + req.params.id + req.params.slug
const hasCache = await apicache.has(key)
if (hasCache) {
    const cached = await apicache.get(key)
    const { Readable } = require('stream')
    const readable = new Readable()
    readable._read = () => {} // _read is required; a no-op is enough when pushing manually
    readable.push(cached)
    readable.push(null) // signals end-of-stream (Readable has no end() method)
    return readable.pipe(res)
}

if (isBuy === true || storeEpisode.type.includes('free')) {
    await apicache.set(key, body, '1 day')
}

_acquireLockWithId is not a function

Hello,

I often get a TypeError, "that._acquireLockWithId is not a function", at line 95 of redis-cache.js.

It seems there is no function called _acquireLockWithId in the file!

How to replace existing cached data for endpoint?

I would like to replace the existing data returned by a Redis-cached endpoint.

I have a background process that is given the key for the cache as generated by a call to getKey(), so I should be using the same key as apicache-plus does.

My issue is that although the background process inserts new data into the cache, when the endpoint is hit again no data is returned; it's as if the cacheObject does not contain the data.

Printing out the cache object that is stored by the initial cache of the endpoint gives:

[apicache] AC: cacheObject = { headers: { 'x-powered-by': 'Express', 'access-control-allow-origin': '*', 'content-type': 'application/json; charset=utf-8', etag: 'W/"a11-7HqZ9k3jIh29rVTC8CNeY8qUBTE"', 'cache-control': 'max-age=30, must-revalidate', vary: 'Accept-Encoding', 'content-encoding': 'br' }, 'data-extra-pttl': 238, status: 200, encoding: 'binary', 'data-token': 'a668dc91-4bca-42a0-8b46-7ddc467f2034', timestamp: 1664485660278, key: 'get/http://localhost/api/v1/dashboard/net?branch=baz&project=bar&user=foo{}' }

The endpoint returns the expected cache data.

But when I call apicache.set(key,data) with the same key as in the structure above then the cache object becomes:

[apicache] AC: cacheObject = {"key":"get/http://localhost/api/v1/dashboard/net?branch=baz&project=bar&user=foo{}","data":{"computedAt":1664485680700,"data":[{...}]}}

and the endpoint returns nothing at all.

The newly stored data doesn't have the same shape as the data that was originally stored.

So, how should we use apicache.set() to update the cached data of an endpoint?

Thanks

How to set cache duration by "expires" header

More of a question than an issue really:

I use apicache-plus version 2.3.0 with an ioredis client.

There is an external API that I would like to cache, which responds with an "expires" header as the only reliable indication of response expiration. The access-control-max-age doesn't really help, as it is set to a default value that has nothing to do with the real expiry.

The very hacky way I have tried to include this is in redis-cache.js directly before writing to DB:

var final = function(cb) {
  if (hasErrored) return cb()

  try {
    var chunkCount = Math.ceil((byteLength || 1) / highWaterMark)
    var serverToRedisLatency = POOR_SERVER_TO_REDIS_LATENCY * chunkCount
    var dataMaxAllowedTimeToRead = Math.round(
      byteLength / TYPICAL_3G_DOWNLOAD_SPEED + serverToRedisLatency
    )
    var group = getGroup()
    var value = getValue()
    if (cacheEncoding === 'buffer') {
      if ((value.headers['content-encoding'] || 'identity') === 'identity') {
        value.encoding = 'utf8'
      } else value.encoding = 'binary' // alias to latin1 from node >= v6.4.0
    } else {
      value.encoding = cacheEncoding || 'utf8'
    }

    // patch for "expires" header
    expireAt = Date.parse(value.headers['expires'])

Is there an alternative way to achieve "cache expiry = expires header"?
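Short of patching redis-cache.js, one dependency-free way to turn an Expires header into a duration is a small helper like the sketch below. The function name and fallback value are made up, and apicache-plus would still need a hook to apply a per-response TTL:

```javascript
// Hypothetical helper: derive a TTL in seconds from an upstream "Expires"
// header, falling back to a default when the header is absent, unparseable,
// or already in the past.
function ttlFromExpires(expiresHeader, fallbackSeconds = 60) {
  const expireAt = Date.parse(expiresHeader) // NaN if missing/unparseable
  if (Number.isNaN(expireAt)) return fallbackSeconds
  const ttlSeconds = Math.round((expireAt - Date.now()) / 1000)
  return ttlSeconds > 0 ? ttlSeconds : fallbackSeconds
}
```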

Decompression failed error with node-redis when serving cached response

Hello, I am getting an error when trying out the middleware.

I recently tried to replace apicache with this fork. As I understand, it has the same API, but whenever the middleware tries to respond with a cached response I get a "Decompression failed" error in Postman, and requests sent using axios just fail with no error code.

const apiCache = require('apicache-plus')

const redisClient = redis.createClient(getRedisConfiguration({ detect_buffers: true })) // this gets node-redis v3 client 
const redisApiCache = apiCache.options({
  redisClient,
  include: [200, 201]
})

Redis shows the cached call along with another entry that seems to hold the compressed response.
Any help will be much appreciated.

clear cache with redis / group?

Hi,

I'm having a bit of an issue with clearing the cache stored in Redis. Looking at the docs, I can see the recommended way to clear the cache is to use apicacheGroup:

req.apicacheGroup = 'bookList'

So, in my route, I set apicacheGroup at the beginning of the route handler. When I look at my Redis instance, I see the group name that was created with the request.

Now, when I try and 'clear' the cache using

apicache.clear('bookList')

It appears to clear the group that was created; however, the actual data keys remain. I also noticed that when the cache timeout hits, it clears out the actual cached items but leaves the group.

I'm probably not setting this up correctly, however the docs make no mention of how to clear with redis so I'm a bit confused by this.

Basically, the flow I'm looking to do is

  1. API returns JSON
  2. User updates record on CMS
  3. After update a purge to that specific cache 'group'
  4. Next request will get updated data

Thanks for any help
