
Comments (24)

lucasconstantino commented on May 6, 2024

I put some work into a side project which might interest you folks: https://github.com/TallerWebSolutions/apollo-cache-instorage

jamesreggio commented on May 6, 2024

Thanks for the concrete use case, @fbartho. We're definitely going to get this feature in ASAP!

jamesreggio commented on May 6, 2024

Hi @allpwrfulroot — your search and filter use case can probably be handled by Apollo Client right now by changing the fetchPolicy for those queries. fetchPolicy allows you to specify whether to satisfy a query strictly from cached local data, or to use some combination of cached local data and network requests. As for an offline mutation queue, it's something I know is on Apollo's radar, but isn't addressed in this repo. (Your intuition to use apollo-link for queueing offline requests is a good one. If you decide to code it up, please do share!)
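
For example, with react-apollo's Query component you can mark the periodically-refreshed list as cache-and-network and serve the search/filter screens straight from the on-device cache. A sketch (the Cars query, fields, and component names are made up for illustration):

    import React from 'react';
    import gql from 'graphql-tag';
    import { Query } from 'react-apollo';

    // Hypothetical query; substitute your own schema.
    const CARS_QUERY = gql`
      query Cars {
        cars {
          id
          make
          color
        }
      }
    `;

    // Show cached data immediately, then refresh it from the network.
    const CarList = () => (
      <Query query={CARS_QUERY} fetchPolicy="cache-and-network">
        {({ data, loading }) => null /* render data.cars here */}
      </Query>
    );

    // Serve search/filter entirely from the cache (no network request).
    const CarSearch = () => (
      <Query query={CARS_QUERY} fetchPolicy="cache-only">
        {({ data }) => null /* filter data.cars locally, e.g. by make or color */}
      </Query>
    );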

Hey @fbartho, I think I follow your use case, but I don't think we're going to be able to solve this problem until Apollo adds finer-grained caching metadata to the client itself. A couple thoughts, though:

  1. Developers tend to build their UI around the assumption that a query result will be 100% absent or 100% complete. If we supported culling specific types, it would mean that the results would have nulls where there would normally be data. Given your needs, I'm sure you would write code resilient to nested null values, but in general, it's something I think would lead to difficult-to-diagnose bugs for most people.

  2. If your main concern is avoiding the display of stale data, perhaps you could add an updatedAt field to the time-sensitive objects and avoid displaying them if they're older than a specific threshold (see the sketch after this list). (Slightly more hackily, you could add a fetchedAt field that the server just writes Date.now() into and use that for filtering.)

  3. For breaking schema changes, you can always use the CachePersistor API to explicitly clear the cache or decline to restore it. Breaking schema changes tend to require some forethought (and app-specific version bookkeeping) already, so we figured that explicit controls would be enough for that use case (especially since Apollo Client operates agnostic to the schema). It's also good practice to have a system for detecting launch crashes, after which you should probably discard all of your cached data.
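
A minimal sketch of the staleness check from point 2, assuming each time-sensitive object carries an updatedAt timestamp (the field name and threshold are illustrative):

    // Treat anything older than 15 minutes as stale and hide it from the UI.
    const MAX_AGE_MS = 15 * 60 * 1000;

    const isFresh = item =>
      Boolean(item.updatedAt) &&
      Date.now() - new Date(item.updatedAt).getTime() < MAX_AGE_MS;

    // e.g. after reading a (possibly restored) query result:
    const selectFreshPosts = data => (data.posts || []).filter(isFresh);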

I'm sorry if it feels like I'm just making excuses for a missing feature. Trust me, I'd love to have better control over what gets cached. I just wanted to share what we're using today to make our app better (i.e., this repo) — the Apollo folks will continue working on the sophisticated stuff :)

jamesreggio commented on May 6, 2024

Yeah, that's definitely a larger concern with apollo-link-state, where you may have some persistent state that you don't want to lose, but needs to be migrated. redux-persist has mechanisms for migrations, but it's easier for them since the data that gets persisted is the store state itself, and not some special serialization format, like what we have to use for the Apollo store.

Maybe @jbaxleyiii has thought about strategies for dealing with local apollo-link-state schema changes?

And yeah, to expand upon our app's startup process, we basically do this (a rough sketch in code follows the list):

  1. As early in app startup as possible, read a numeric crashes key from AsyncStorage. If not present, assume 0.
  2. Increment crashes by 1 and write to AsyncStorage.
  3. During Apollo setup, if the original value of crashes is > 0, don't restore the Apollo cache.
  4. During Redux setup, if the original value of crashes is > 1, don't restore the user's session token. (Losing the user's session token is significantly worse for UX than just losing the Apollo store, so we only clear it upon two subsequent launch crashes.)
  5. Finally, in the Root component's componentDidMount method, use setTimeout to write 0 to crashes in AsyncStorage. (This is the point where we're basically saying that the app started successfully. You can place it wherever you feel is appropriate.)
  6. Bonus: in the Root component's componentDidCatch method, write 1 to crashes. This would likely be the place you'd catch an error pertaining to a schema mismatch.
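
A rough sketch of the counter flow above, assuming React Native's AsyncStorage and this repo's CachePersistor; the function names and the timeout are illustrative, not a definitive implementation:

    import { AsyncStorage } from 'react-native';

    const CRASHES_KEY = 'crashes';

    // Steps 1 and 2: read the counter as early as possible, then bump it immediately.
    export async function recordLaunch() {
      const raw = await AsyncStorage.getItem(CRASHES_KEY);
      const crashes = Number(raw) || 0;
      await AsyncStorage.setItem(CRASHES_KEY, String(crashes + 1));
      return crashes; // the value from before this launch
    }

    // Step 3: only restore the persisted Apollo cache after a clean previous launch.
    export async function maybeRestoreCache(persistor, previousCrashes) {
      if (previousCrashes === 0) {
        await persistor.restore();
      }
    }

    // Step 5: once the app has rendered successfully, reset the counter.
    export function markLaunchSuccessful() {
      setTimeout(() => AsyncStorage.setItem(CRASHES_KEY, '0'), 5000);
    }

    // Step 6: a render error before the reset likely points at bad cached data.
    export function markLaunchCrashed() {
      AsyncStorage.setItem(CRASHES_KEY, '1');
    }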

I've been meaning to write a blog post about this strategy, but hopefully that was clear enough. In the end, you're just using a counter to selectively jettison local data in hopes that the crashes eventually go away.

jamesreggio commented on May 6, 2024

It's being considered as a high-priority feature for Apollo Client 3.0.

Roadmap is here: apollographql/apollo-feature-requests#33 (comment)

jamesreggio commented on May 6, 2024

I think it's a desirable feature, but I'm not sure the current APIs support it.

To my understanding, the values returned and consumed by extract and restore are meant to be opaque. If apollo-cache-inmemory was the only supported cache, we could safely make assumptions about the shape of that payload and operate upon it; however, I'm already happily using this component with apollo-cache-hermes, which uses a different serialization format.
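
For concreteness, this is roughly the shape apollo-cache-inmemory hands back from extract(), i.e. what any transformation would have to operate on (the values below are illustrative):

    import { InMemoryCache } from 'apollo-cache-inmemory';

    const cache = new InMemoryCache();
    const extracted = cache.extract();
    // `extracted` is a flat map of normalized records keyed by ID, roughly:
    // {
    //   ROOT_QUERY: { user: { type: 'id', id: 'User:1', generated: false } },
    //   'User:1': { __typename: 'User', id: '1', name: 'Ada' },
    // }
    // apollo-cache-hermes serializes to a different shape entirely.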

That said, I think we could go four ways with this:

  1. Develop a standard serialization/normalization format for the values returned/consumed by extract and restore. We could do this in a backwards compatible way by including some magic value in the payload to indicate compliance. With that, we could then safely operate on those values. This probably has the longest lead time, but could conceivably deliver the largest benefit.

  2. Expose some new APIs (or method parameters) on the cache to selectively extract a subset of the cache data. For caches that don't implement it, we could fall back to a full extract/restore. This would probably be quick to implement, but feels too narrowly tailored to our specific use case to justify a new high-level API.

  3. Just expose hooks and let the users/ecosystem transform however they'd like. I could imagine packages like apollo-cache-persist-inmemory-filter and apollo-cache-persist-hermes-filter cropping up to join the desired logic with knowledge of the specific serialization formats. This is definitely the most brittle option, seeing as any change to the opaque serialization format for any cache would likely break these transformers. The only argument I can make in favor of the approach is that we can always just catch any exceptions thrown and fall back to persisting/restoring everything (without transformation).

  4. Implement app-level cache management via directives, per earlier maintainer meeting discussions. This would certainly be the best to me, but isn't going to land anytime soon.

What do you think?

cc: @jbaxleyiii

jbaxleyiii commented on May 6, 2024

What about extract taking a query as an option? When possible, I'm a big fan of behavior being driven by operations, since that is what most people are already used to when working with GraphQL / Apollo. The area where this is a problem, though, is with parameterized fields. We could possibly use a variation of an introspection-like query to filter on types, but 🤷 that seems like a lot of work.

I think option #3 is the best way to start the filter exploration before we land on any formal placement for it. In fact, between the two teams (hermes + inmemory) we could even include them as part of this package for now. All other cache impls I have seen piggyback on inmemory. I'd be happy to help write this for inmemory.

So tl;dr: let's start with helper functions based on whatever kind of filter API we want to start with, and work back towards formal additions if needed?

jbaxleyiii commented on May 6, 2024

The initial API may be something as simple as an array of type names, to include everything in the cache matching those types?
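
Something along these lines over the extracted data, perhaps (a sketch against apollo-cache-inmemory's flattened output, not an agreed API):

    // Keep only records whose __typename is in the allow-list, plus the query root.
    const filterByTypenames = typenames => extracted =>
      Object.keys(extracted).reduce((kept, key) => {
        const record = extracted[key];
        if (key === 'ROOT_QUERY' || typenames.includes(record.__typename)) {
          kept[key] = record;
        }
        return kept;
      }, {});

    // Hypothetical usage, given a cache instance:
    //   const filtered = filterByTypenames(['User', 'Post'])(cache.extract());
    // Caveat: ROOT_QUERY may still reference records that were culled.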

jamesreggio commented on May 6, 2024

Hmm... extract taking a query feels a little wrong to me, because I think it implies that you should be able to express everything you'd want to persist in a single query. I could see using such an API to persist just the results of whatever query your app's initial screen loads, but that seems too narrowly tailored.

I'm fine proceeding with option 3; i.e., exposing lifecycle hooks and delegating to other packages. Again, just going off our use case, this feature isn't going to be useful for us because we cannot express the interesting subset of the cache to persist in the absence of additional metadata, such as time initially stored or time last accessed.

Could you elaborate on the typename scenario? Do you think you'd also implicitly include every Query object?

peggyrayzis commented on May 6, 2024

Thanks for writing up all of the options @jamesreggio! I agree that option #1 would probably be the most beneficial in terms of avoiding breaking changes in the long term, but I do worry that it would take too long to execute. I'd like to eventually move toward option #2 once we learn more about how users want to filter the cache, but it seems like it would be easier to experiment with #3 first.

I really like @jbaxleyiii's suggestion on using a query to filter down the extracted cache. Parameterized fields might be a good way to offer more fine grained control in the future, but for now I think we can get close to what we want just by filtering on the type.

jamesreggio commented on May 6, 2024

Sounds good, @peggyrayzis. I can add some hooks later tonight, and then we can consider lerna-fying this repo to include a basic inmemory-compatible query filter?

peggyrayzis commented on May 6, 2024

Sounds good to me @jamesreggio! 😀 Would you be down to try Yarn workspaces?

Also, I think your use case of filtering on metadata is a really interesting one that we definitely need to consider for the next iteration of cache filtering. I can see it being applied to more than just this project as a way to implement cache garbage collection.

jamesreggio commented on May 6, 2024

Happy to try Yarn workspaces!

fbartho commented on May 6, 2024

I would be looking to use this as an engine to persist/restore my cache, while working with apollo-link-state and apollo-link-rest. I can't persist my cache at all, safely, unless I can guarantee not to restore certain bits of state. Additionally, with certain API changes, I might need to write a patch that will refuse to restore certain values (or all). (For example if a schema changed types for a field).

Therefore, I strongly support this Issue as a high-priority feature!

giautm commented on May 6, 2024

Hey, why not use directives like @ignorePersist to ignore that query?
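
Something like this, presumably (the directive is hypothetical; nothing reads it today):

    import gql from 'graphql-tag';

    const PROFILE_QUERY = gql`
      query Profile {
        viewer {
          name
          sessionToken @ignorePersist # hypothetical: never write this field to disk
        }
      }
    `;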

jamesreggio commented on May 6, 2024

@giautm, that's something the Apollo team is considering for the longer term; however, it requires deep adjustments to the architecture of the Apollo cache. This module is designed to work with the existing APIs to provide a simple solution in the interim.

allpwrfulroot commented on May 6, 2024

Not quite sure if this is one of the pieces I'm looking for or not:

I have a list of Cars (for example) that I'd like to have periodically updated from the backend, but will want to use the on-device cache to service requests like Search and Filter (Ford, yellow, etc). It's important for the app to be offline-friendly, and somehow there will have to be a queue for offline actions like Messages. Event logging will be in the Link mix somewhere.

What's awesome is I know Apollo Links can do it all, I just have to figure out how!

peggyrayzis commented on May 6, 2024

I experimented with cache filtering, but after talking to @jamesreggio, we don't think it's the best solution. Brain dumping everything so I don't forget!

Original API Proposal: Add a filter property on persistCache for a function that takes extracted data from the cache and returns filtered data. Two of the filters we wanted to implement were filterBySize, which clears the cache after a certain threshold, and filterByType, which takes a query and uses graphql-anywhere to filter down the data.

    // Usage: compose the filters and hand them to persistCache.
    // (apollo-cache-inmemory-filter is the package name from this proposal;
    // `storage` and `cache` are your storage provider and Apollo cache instances.)
    import gql from 'graphql-tag'
    import compose from 'lodash/flowRight' // or any right-to-left compose helper
    import { persistCache } from 'apollo-cache-persist'
    import { filterBySize, filterByType } from 'apollo-cache-inmemory-filter'

    const filterQuery = gql`
      {
        user {
          posts {
            title
            body
          }
        }
      }
    `

    persistCache({
      storage,
      cache,
      filter: compose(filterBySize(500000), filterByType(filterQuery)),
    })

    // Proposed filterByType implementation, built on graphql-anywhere:
    import graphql from 'graphql-anywhere'

    export const filterByType = query => {
      // Resolve each field by reading it straight off the parent object.
      const resolver = (fieldName, root) => root[fieldName];
      // Run the filter query over the data, keeping only the matching fields.
      return data => graphql(resolver, query, data);
    };

This is problematic because the data returned from extract is already flattened from the normalization process. If we were to run graphql on the result, we would have to reconstruct it again. Additionally, filter functions might have to execute either before serialization (filterByType) or after serialization (filterBySize), which complicates things a bit.

New proposal: Hold off on type filters, add a maxSize property to the persistCache config that works similarly to filterBySize
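
A sketch of how that maxSize check might behave (the option name and semantics here are just the proposal, not a final API):

    // Inside the persistence layer: skip the write once the serialized cache
    // grows past the configured threshold, rather than storing a huge blob.
    const persist = async ({ cache, storage, key, maxSize }) => {
      const data = JSON.stringify(cache.extract());
      if (maxSize && data.length > maxSize) {
        await storage.removeItem(key);
        return;
      }
      await storage.setItem(key, data);
    };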

fbartho commented on May 6, 2024

New proposal: Hold off on type filters, add a maxSize property to the persistCache config that works similarly to filterBySize

I would still really need a filter mode to prevent certain "ephemeral" types from ending up in the output. I'm perfectly fine making my filter rule apply to flattened keys. (I was planning on having all the "private" types be prefixed by _ or $ -- that would make it simple to filter all the fields/values out by parent typename.)

Is there a reason this wouldn't work? (I think I'm making an unfair assumption about the shape of the serialized data.)

fbartho commented on May 6, 2024

I did some thinking about this over the weekend. Is the current cache architecture composable? If I could declare that some models only get stored in MemCacheA while others only get stored in MemCacheB, then I could configure apollo-client with pseudocode like this:

new apollo.Client({
    cache: composeCaches(memCacheA, memCacheB)
});

This would make it possible for me to "pre-filter" my caches by object-types, and only persist one of the two memory caches to disk. -- Thoughts @peggyrayzis?

peggyrayzis commented on May 6, 2024

The cache API is synchronous, so unfortunately we wouldn't be able to compose async caches without a significant refactor. I am looking into passing a custom store to InMemoryCache that can communicate with an async storage provider, but that's out of the scope of this repo.

fbartho commented on May 6, 2024

Thanks for your response @jamesreggio -- I think you hit the nail on the head with my concerns. I am particularly interested in your 3rd point; when considering apollo-link-rest and apollo-link-state, I'm worried about schema changes occurring across app versions. Crashes caused by this would occur rarely, and mostly in the wild rather than in development. One mitigation I'll probably have to build is to avoid resurrecting the persisted cache across app versions. This is unfortunate, but ultimately not the end of the world.
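
One way to do that version gate, assuming this repo's CachePersistor and React Native's AsyncStorage (the key name and version value are arbitrary):

    import { AsyncStorage } from 'react-native';
    import { CachePersistor } from 'apollo-cache-persist';

    const SCHEMA_VERSION_KEY = 'apollo-schema-version';
    const SCHEMA_VERSION = '3'; // bump whenever the REST/local schema changes shape

    export async function setupCache(cache) {
      const persistor = new CachePersistor({ cache, storage: AsyncStorage });
      const storedVersion = await AsyncStorage.getItem(SCHEMA_VERSION_KEY);

      if (storedVersion === SCHEMA_VERSION) {
        // Same schema as the last run: safe to resurrect the persisted cache.
        await persistor.restore();
      } else {
        // Schema changed (or first run): drop anything previously persisted.
        await persistor.purge();
        await AsyncStorage.setItem(SCHEMA_VERSION_KEY, SCHEMA_VERSION);
      }

      return persistor;
    }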

amcdnl commented on May 6, 2024

Any updates on this?

fbartho commented on May 6, 2024

@lucasconstantino I love this!
