apollographql / apollo-client

🚀 A fully-featured, production-ready caching GraphQL client for every UI framework and GraphQL server.

Home Page: https://apollographql.com/client

License: MIT License

Languages: TypeScript 98.63%, JavaScript 1.08%, HTML 0.24%, Shell 0.05%
Topics: graphql, graphql-client, typescript, apollo-client, apollographql

apollo-client's People

Contributors

abernix, abhiaiyer91, alessbell, amandajliu, benjamn, bignimbus, brainkim, calebmer, cesarsolorzano, dependabot[bot], github-actions[bot], greenkeeper[bot], greenkeeperio-bot, helfer, hwillson, jerelmiller, johnthepink, jpvajda, kamilkisiela, meschreiber, peggyrayzis, phryneas, poincare, renovate-bot, renovate[bot], rricard, shadaj, slava, tmeasday, trevorblades


apollo-client's Issues

Client design discussion thread

I think the idea I'm tending towards is that the core client just has a registered set of queries that the UI is interested in. Any query aggregation from the UI would be built on top of this. Each query is an event emitter, so that every query consumer can get notified about changes in that query. Here's a crappy diagram:

(image: rough architecture diagram, uploaded from Slack for iOS)

Going to make a better diagram later, heading home now.
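To make the "each query is an event emitter" idea concrete, here is a minimal sketch; all names are hypothetical and this is not the actual apollo-client code:

// Minimal sketch of a query registry where each registered query is backed by
// an event emitter that consumers subscribe to. Names are illustrative only.
import { EventEmitter } from 'events';

interface RegisteredQuery {
  queryString: string;
  emitter: EventEmitter;
}

class QueryRegistry {
  private queries = new Map<number, RegisteredQuery>();
  private nextId = 0;

  // The UI registers a query it is interested in and gets back a handle id.
  register(queryString: string): number {
    const id = this.nextId++;
    this.queries.set(id, { queryString, emitter: new EventEmitter() });
    return id;
  }

  // Consumers subscribe to result changes for one query.
  onResult(id: number, listener: (result: unknown) => void): void {
    this.queries.get(id)?.emitter.on('result', listener);
  }

  // The store layer calls this whenever new data relevant to the query arrives.
  broadcast(id: number, result: unknown): void {
    this.queries.get(id)?.emitter.emit('result', result);
  }

  unregister(id: number): void {
    this.queries.delete(id);
  }
}

Any query aggregation for the UI would then be layered on top of this registry rather than built into the core.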

Make breaking API changes to align with docs

When I was writing the docs, I decided to write them for the API I want, not the one I have :P

So we should make some changes to the library itself to bring it up to date:

  • Make ApolloClient a default export of the package
  • Make the arguments to ApolloClient an object instead of positional
  • Make sure ApolloClient#query doesn't take a returnPartialResults option
  • Add a few more functions to WatchedQueryHandle

@jbaxleyiii @johnthepink heads up, these are definitely breaking changes. Let me know if you disagree with any of them. You can see the docs for examples of how it would look: https://apollostack.gitbooks.io/apollo-client/content/
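For illustration, a rough before/after of the constructor change; the option names are taken from the examples later in this thread and may not match the final docs:

// Hypothetical before/after for the proposed breaking changes; option names
// are illustrative, not necessarily the final API.

// Before: named export and positional arguments.
// import { ApolloClient } from 'apollo-client';
// const client = new ApolloClient(networkInterface, reduxStore);

// After: default export and a single options object.
import ApolloClient, { createNetworkInterface } from 'apollo-client';

const client = new ApolloClient({
  networkInterface: createNetworkInterface('https://example.com/graphql'),
});

// ApolloClient#query would no longer take a returnPartialResults option;
// partial results would instead be controlled per watchQuery call.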

Don't send unused variables

This would only make a difference in the case where the variables themselves comprise a significant part of the request.
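A sketch of how the client could prune unused variables before sending a request, assuming a recent graphql-js for parsing; this is not existing apollo-client behavior:

// Sketch: drop variables the query never references before sending the request.
import { parse, visit } from 'graphql';

function pruneUnusedVariables(
  queryString: string,
  variables: Record<string, unknown>
): Record<string, unknown> {
  const used = new Set<string>();
  visit(parse(queryString), {
    VariableDefinition() {
      // Don't count the declaration itself as a usage.
      return false;
    },
    Variable(node) {
      used.add(node.name.value);
    },
  });

  const pruned: Record<string, unknown> = {};
  for (const name of Object.keys(variables)) {
    if (used.has(name)) {
      pruned[name] = variables[name];
    }
  }
  return pruned;
}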

npm install fails on fresh clone

After cloning a fresh repo, npm install runs tsc, which fails with the following error:

error TS6053: File 'typings/main.d.ts' not found.

Am I missing some dependencies?

Add the ability to manage client and server data together

As a special case of multiple endpoints, I think it would be really valuable to be able to use Apollo Client as a one-stop-shop for client data management.

This would be very similar to using Redux (since it is under the hood) but you would make a typed schema for your client-side data, declare relationships to server data, do client data changes via mutations, and be able to query client and server data in one GraphQL query.

I don't think this would be super hard to build, and it sounds like kind of the holy grail of data management.
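Purely as an illustration of the "one GraphQL query for both" idea: a hypothetical @client-style annotation (not an existing feature) could mark fields that resolve against the local schema, while everything else goes to the server.

// Hypothetical example only: '@client' marks fields resolved from the local
// (Redux-backed) store; everything else goes to the server as usual.
const combinedQuery = `
  query TodoPage($listId: ID!) {
    list(id: $listId) {       # server data
      name
      tasks {
        id
        text
      }
    }
    ui @client {              # client-only data, resolved locally
      selectedTaskId
      sidebarOpen
    }
  }
`;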

Set up CI test coverage report

I think there are services that can take our test coverage report and make it look nice so that we can just click to see a page where we can think about what tests might be good to add. Would be good to get source maps in there as well.

Support named fragments

Some questions:

  1. Do we inline the fragments, or try to keep the original names?
  2. What do we do about variables?
  3. Do the fragments return their own data, or does the UI component get it passed in from the parent?

More to think about later. The main question is, how critical is named fragment support for the alpha launch? It's one of the flagship features of the Relay API.
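For reference, this is what a named fragment looks like in plain GraphQL (field names are illustrative); the questions above are about how the client should treat documents like this:

// Standard GraphQL named fragment, shown inside a JS template string.
const query = `
  query getAuthor($id: ID!) {
    author(id: $id) {
      ...authorDetails
    }
  }

  fragment authorDetails on Author {
    _id
    name
    twitterHandle
  }
`;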

Optimistically batch requests

In many applications, there are a number of ways that data is requested. Let's take a moderately complex React app as the example. In this case, the app has SSR (server-side rendering), a single UI tree, a centralized application state (using Redux), and the need to prefetch application data that isn't UI-dependent.

Using a direct 1:1 query/operation:request

  • On the server, this app currently renders on the page level component that the router specifies. This is common in react-router apps that allow for a static fetchData which returns a promise. This means our page level component would have to handle all data for all of its children, making it inefficient on the client and not super DRY. Subcomponents with data needs will be rendered without data on the server. If all background processes are run on the server (sometimes true, sometimes not) more requests could happen
  • On the client the app will render based on how the UI library handles priority. So top -> bottom or bottom -> top. Either way, each component that specifies its data will make a request. During the tree render, a state based fetch is triggered that is separate from the UI.

Even from this case, we are either making multiple requests to get all the data, or missing data that a single action (like rendering a synchronous UI) will need.

Using a batched system

  • On the server, as the tree is being rendered, we know it's a synchronous action, so we hold off on the request until we have all of the operations. Then we combine them and make the request.
  • On the client, this will typically act the same as the server on the initial render. On subsequent UI changes, we should batch non-blocking queries
  • During the render, state based fetches could insert their query into the queue and be combined with the single / reduced requests.

Things to solve / allow for:

  • blocking queries: if a component needs data to decide which child to render, it must be able to fetch then make the subsequent call
  • multiple execution points: e.g. UI rendering and application state / data fetching
  • merging of variables / separation of data: e.g. two operations both using $id but with different values, variables: { id: "foo" } vs. variables: { id: "foobar" }

Other considerations:

  • in an http2 world, this may not even be the ideal request pattern?
  • is it more performant on the GQL side to batch requests or have a lot of small operations?
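To make the batched approach concrete, here is a rough sketch of a network interface that combines all queries issued in the same tick into a single request; the class name is hypothetical, and it assumes the server accepts an array of operations in one POST (mirroring how QueryManager already calls networkInterface.query with an array, shown later in this page):

// Sketch: collect all queries issued in the same tick and send them as one
// batched request. The BatchedNetworkInterface name is hypothetical.
interface Request {
  query: string;
  variables?: Record<string, unknown>;
}

interface GraphQLResult {
  data?: unknown;
  errors?: unknown[];
}

class BatchedNetworkInterface {
  private queue: Array<{
    request: Request;
    resolve: (r: GraphQLResult) => void;
    reject: (e: Error) => void;
  }> = [];
  private scheduled = false;

  constructor(private uri: string) {}

  // Each caller gets its own promise, but requests issued in the same tick
  // are combined into a single HTTP request.
  query(request: Request): Promise<GraphQLResult> {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
      if (!this.scheduled) {
        this.scheduled = true;
        Promise.resolve().then(() => this.flush());
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    try {
      const response = await fetch(this.uri, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(batch.map((item) => item.request)),
      });
      const results: GraphQLResult[] = await response.json();
      batch.forEach((item, i) => item.resolve(results[i]));
    } catch (error) {
      batch.forEach((item) => item.reject(error as Error));
    }
  }
}

Blocking queries would bypass the queue (or force an immediate flush), which covers the "fetch, then make the subsequent call" case above.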

Allow for multiple remote endpoints

One of the things that has me a little confused is the seeming lack of versioning recommended by GraphQL implementation articles and examples. Speaking for our team, we are already planning on making our current implementation essentially a pre-release (/graphql) and starting our Apollo server build-out at api/v1 or graphql/v1.

That being said, I'm interested to get thoughts on using the Apollo client with multiple remote endpoints. Here are a few examples / thoughts on the API design.

3rd party service + in house service

Let's say we look into the future a year or so. GraphQL has solidified around a standard (already pretty dang close!) and more and more services offer Graph endpoints along with their standard REST endpoints. So I'm building an app that pulls from my app source, and pulls from reddit for example.

// a vanilla, not external redux store example
import ApolloClient, { createNetworkInterface } from "apollo-client";

const Reddit = createNetworkInterface('https://reddit.com/graphql');
const Heighliner = createNetworkInterface('https://api.newspring.cc/graphql');

// we can either specify the remote locations on the queries
// fwiw, I think this is a bad idea ;) but anything is possible right?
// this would mean only one client is instantiated, with multiple network interfaces passed to it
`
  @Reddit
  query getCategory($categoryId: Int!) {
      category(id: $categoryId) {
        name
        color
      }
    }
`

`
  @Heighliner
  query getCategory($categoryId: Int!) {
      category(id: $categoryId) {
        name
        color
      }
    }
`

const client = new ApolloClient({
  networkInterface: {
    Reddit,
    Heighliner
  }
})

// or we can create multiple clients but will have to figure out the duplicate store key
const RedditClient = new ApolloClient({
  networkInterface: Reddit,
  reduxRootKey: "reddit",
});

const HeighlinerClient = new ApolloClient({
  networkInterface: Heighliner,
  reduxRootKey: "heighliner",
});

Personally I am a fan of instantiating multiple clients as the two endpoints should have no overlap (if they did, your GraphQL server should be handling the linking, not the client). That also gives us the benefit of doing something like this in the future

Remote service + local service. One query / action language to rule them all

// a vanilla, not external redux store example
import ApolloClient, { createNetworkInterface, createLocalInterface } from "apollo-client";

// localGraphQLResolver is a client side implementation of a graphql app
// it allows for mutations to update / change local state of the app
// and queries to get app state
// @NOTE this is just an idea I've been playing with. Hope to have an example soon
const Local = createLocalInterface(localGraphQLResolver); 
const Heighliner = createNetworkInterface('https://api.newspring.cc/graphql');

const App = new ApolloClient({
  networkInterface: Local,
  reduxRootKey: "local",
});

const HeighlinerClient = new ApolloClient({
  networkInterface: Heighliner,
  reduxRootKey: "heighliner",
});

So all in all, I don't think this requires any API changes, just wanted to bring it up 👍

Create a docs site

Even though this library isn't done, writing docs can be very informative. It will help us figure out where we have a coherent story and where we don't. We can also put in placeholders for things we know we need to implement, to sketch out the API.

Merging into cache

Different subtrees of the same query might access different fields on the same object. We should merge them instead of overwriting the whole object.
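A minimal sketch of the field-level merge, assuming a normalized store keyed by data id (the helper name is illustrative):

// Sketch: merge newly fetched fields into an existing store object instead of
// replacing it wholesale. Store shape is illustrative.
type StoreObject = Record<string, unknown>;
type NormalizedStore = Record<string, StoreObject>;

function mergeIntoStore(
  store: NormalizedStore,
  dataId: string,
  incomingFields: StoreObject
): NormalizedStore {
  const existing = store[dataId] || {};
  return {
    ...store,
    // Fields written by a different subtree of the same query are preserved;
    // only the fields present in incomingFields are overwritten.
    [dataId]: { ...existing, ...incomingFields },
  };
}

// Example: two subtrees read different fields of the same Person.
let store: NormalizedStore = {};
store = mergeIntoStore(store, 'Person:1', { name: 'John Carter' });
store = mergeIntoStore(store, 'Person:1', { twitterHandle: '@john' });
// store['Person:1'] now has both name and twitterHandle.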

Variables

Currently, we don't support variables. They are especially useful in mutations, so for now I'll implement simple mutations without variables and then add variable support for both queries and mutations afterwards.

Null values in lists

In theory, all items in a list returned from GraphQL should be of the same type. But the server may also return a null in the middle of a list:

      nestedArray: [
        {
          stringField: 'This is a string too!',
          numberField: 6,
          nullField: null,
        },
        {
          stringField: 'This is a string also!',
          numberField: 7,
          nullField: null,
        },
        null,
      ],

Currently, this will error out:

 Error: Can't find field stringField on result object abcd.nestedArray.2.
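A rough illustration of the guard the store-writing code probably needs (the helper below is hypothetical, not the actual implementation): null list items should be stored as explicit nulls rather than recursed into.

// Sketch: when normalizing a list field, store an explicit null for null
// items instead of recursing into them (which is what currently throws).
function writeArrayToStore<T>(
  items: Array<T | null>,
  writeItem: (item: T, index: number) => string // returns the item's data id
): Array<string | null> {
  return items.map((item, index) => {
    if (item === null) {
      // Preserve the null so the result can be reconstructed on read.
      return null;
    }
    return writeItem(item, index);
  });
}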

Make sure root query that returns object with ID is cached

This test kinda describes the situation:

  it('properly normalizes a query with variables', () => {
    const query = `
      query myQuery($intArg: Int, $floatArg: Float, $stringArg: String) {
        id,
        stringField(arg: $stringArg),
        numberField(intArg: $intArg, floatArg: $floatArg),
        nullField
      }
    `;

    const variables = {
      intArg: 5,
      floatArg: 3.14,
      stringArg: 'This is a string!',
    };

    const result = {
      id: 'abcd',
      stringField: 'Heyo',
      numberField: 5,
      nullField: null,
    };

    const normalized = writeQueryToStore({
      result,
      query,
      variables,
    });

    assertEqualSansDataId(normalized, {
      [result.id]: {
        id: 'abcd',
        nullField: null,
        'numberField({"intArg":5,"floatArg":3.14})': 5,
        'stringField({"arg":"This is a string!"})': 'Heyo',
      },
    });
  });

(This is a note so that I can focus on a different thing)

Unable to npm install on linux

Here is the full log:

➜  apollo-client git:(master) ✗ npm install        
npm WARN package.json Dependency 'graphql' exists in both dependencies and devDependencies, using 'graphql@^0.4.17' from dependencies
npm WARN package.json Dependency 'lodash' exists in both dependencies and devDependencies, using 'lodash@^4.6.1' from dependencies
npm WARN peerDependencies The peer dependency webpack@^1.0.0 included from babel-loader will no
npm WARN peerDependencies longer be automatically installed to fulfill the peerDependency 
npm WARN peerDependencies in npm 3+. Your application will need to depend on it explicitly.
npm WARN peerDependencies The peer dependency eslint-plugin-react@^4.0.0 included from eslint-config-airbnb will no
npm WARN peerDependencies longer be automatically installed to fulfill the peerDependency 
npm WARN peerDependencies in npm 3+. Your application will need to depend on it explicitly.
npm WARN deprecated graceful-fs@<version>: graceful-fs version 3 and before will fail on newer node releases. Please update to graceful-fs@^4.0.0 as soon as possible.
npm WARN optional dep failed, continuing <package>@<version>
-
> apollo-client@<version> prepublish /home/maxime/github/apollo-client
> npm run compile


> apollo-client@<version> compile /home/maxime/github/apollo-client
> babel --presets es2015,stage-0 -d lib/ src/

ReferenceError: [BABEL] src/cacheUtils.js: Unknown option: /home/maxime/github/apollo-client/.babelrc.presets
    at Logger.error (/usr/lib/node_modules/babel/node_modules/babel-core/lib/transformation/file/logger.js:58:11)
    at OptionManager.mergeOptions (/usr/lib/node_modules/babel/node_modules/babel-core/lib/transformation/file/options/option-manager.js:126:29)
    at OptionManager.addConfig (/usr/lib/node_modules/babel/node_modules/babel-core/lib/transformation/file/options/option-manager.js:107:10)
    at OptionManager.findConfigs (/usr/lib/node_modules/babel/node_modules/babel-core/lib/transformation/file/options/option-manager.js:168:35)
    at OptionManager.init (/usr/lib/node_modules/babel/node_modules/babel-core/lib/transformation/file/options/option-manager.js:229:12)
    at File.initOptions (/usr/lib/node_modules/babel/node_modules/babel-core/lib/transformation/file/index.js:147:75)
    at new File (/usr/lib/node_modules/babel/node_modules/babel-core/lib/transformation/file/index.js:137:22)
    at Pipeline.transform (/usr/lib/node_modules/babel/node_modules/babel-core/lib/transformation/pipeline.js:164:16)
    at Object.exports.transform (/usr/lib/node_modules/babel/lib/babel/util.js:40:22)
    at Object.exports.compile (/usr/lib/node_modules/babel/lib/babel/util.js:49:20)

npm ERR! Linux 4.3.3-040303-generic
npm ERR! argv "/home/maxime/.nvm/versions/node/v4.0.0/bin/node" "/home/maxime/.nvm/versions/node/v4.0.0/bin/npm" "run" "compile"
npm ERR! node v4.0.0
npm ERR! npm  v2.14.2
npm ERR! code ELIFECYCLE
npm ERR! apollo-client@<version> compile: `babel --presets es2015,stage-0 -d lib/ src/`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the apollo-client@<version> compile script 'babel --presets es2015,stage-0 -d lib/ src/'.
npm ERR! This is most likely a problem with the apollo-client package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     babel --presets es2015,stage-0 -d lib/ src/
npm ERR! You can get their info via:
npm ERR!     npm owner ls apollo-client
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /home/maxime/github/apollo-client/npm-debug.log

npm ERR! Linux 4.3.3-040303-generic
npm ERR! argv "/home/maxime/.nvm/versions/node/v4.0.0/bin/node" "/home/maxime/.nvm/versions/node/v4.0.0/bin/npm" "install"
npm ERR! node v4.0.0
npm ERR! npm  v2.14.2
npm ERR! code ELIFECYCLE
npm ERR! apollo-client@<version> prepublish: `npm run compile`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the apollo-client@<version> prepublish script 'npm run compile'.
npm ERR! This is most likely a problem with the apollo-client package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     npm run compile
npm ERR! You can get their info via:
npm ERR!     npm owner ls apollo-client
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /home/maxime/github/apollo-client/npm-debug.log

Mutations

The ability to do mutations is one of the primary features of a GraphQL client. A mutation looks like this:

mutation {
  createAuthor(
    _id: "john",
    name: "John Carter",
    twitterHandle: "@john"
  ) {
    _id
    name
  }
}

You can include multiple mutation fields in the same operation, as well. If you want to pass the arguments as variables, you have to declare them after the mutation keyword, and then also indicate where those variables should be used:

mutation createAuthor($_id: String, $name: String, $twitterHandle: String) {
  createAuthor(
    _id: $_id,
    name: $name,
    twitterHandle: $twitterHandle
  ) {
    _id
    name
  }
}

// Need to send these variables
{
  _id: "john",
  name: "John Carter",
  twitterHandle: "@john" 
}

As you can see, this is quite verbose, with the names of the variables repeated roughly four times. There are a couple of ways to solve this:

1. Adopt the Relay mutation spec

https://facebook.github.io/relay/graphql/mutations.htm

mutation createAuthor {
  createAuthor(input: $input) {
    clientMutationId,
    status {
      _id
      name
    }
  }
}

// Need to send these variables
{
  input: {
    _id: "john",
    name: "John Carter",
    twitterHandle: "@john"
  }
}

This is simpler because the names of the arguments are only listed once. However, you need a specific input type for every mutation.

But the spec is another restriction on your GraphQL server.

2. Lokka has a mutation wrapper that adds the boilerplate for you

client.mutate(`
    newFilm: createMovie(
        title: "Star Wars: The Force Awakens",
        director: "J.J. Abrams",
        producers: [
            "J.J. Abrams", "Bryan Burk", "Kathleen Kennedy"
        ],
        releaseDate: "December 14, 2015"
    ) {
        ...${filmInfo}
    }
`).then(response => {
    console.log(response.newFilm);
});

This saves you from writing the mutation operation wrapper yourself, and it works with any GraphQL server. The downside is that you can't actually pass variables, because you would need to know their types.

Proposed API

I don't think we need to impose a spec on the server for mutations. For a low-level API, let's just implement the standard mutation API from GraphQL itself, with all of the boilerplate. To eliminate it, we probably need to know the types of the variables, which is not optimal.

This is exactly what Jonas ended up with in his example todos app:

client.mutation( {
  mutation: `mutation makeListPrivate($listId: ID!){
    makeListPrivate(id: $listId)
  }`,
  args: { 'listId': listId }
});

Let's explore additional features for this in the future; there's probably some nice stuff we can do to reduce boilerplate. And maybe we want a switch to support the Relay mutation spec.

Network interface should return errors object in promise rejection

Currently, this code in QueryManager:

    this.networkInterface.query([
      request,
    ]).then((result) => {
      const resultWithDataId = assign({
        __data_id: 'ROOT_QUERY',
      }, result[0].data);

      this.store.dispatch(createQueryResultAction({
        result: resultWithDataId,
        selectionSet: queryDef.selectionSet,
      }));
    }).catch((errors: GraphQLError[]) => {
      this.handleQueryErrorsAndStop(watchHandle.id, errors);
    });

Doesn't work, because catch actually gets a string of serialized errors rather than structured error objects. We need the network interface to reject with the actual error data.
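A sketch of the behavior we'd want from the network interface instead, assuming a fetch-based implementation (the function and field names here are illustrative): reject with the parsed errors array, not a serialized string.

// Sketch only: reject with structured GraphQL errors instead of a string.
interface GraphQLRequest {
  query: string;
  variables?: Record<string, unknown>;
}

interface GraphQLResult {
  data?: unknown;
  errors?: Array<{ message: string }>;
}

async function queryOnce(uri: string, request: GraphQLRequest): Promise<GraphQLResult> {
  const response = await fetch(uri, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(request),
  });
  const result: GraphQLResult = await response.json();

  if (result.errors && result.errors.length > 0) {
    // Attach the raw error objects to the rejection so QueryManager's catch
    // handler receives real data, not a serialized string.
    const error = new Error('GraphQL errors') as Error & {
      graphQLErrors: Array<{ message: string }>;
    };
    error.graphQLErrors = result.errors;
    throw error;
  }
  return result;
}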

Track query loading state via Redux

We set up the query watcher immediately when the client asks for it, which means that results from other queries will trigger a data broadcast. We need to put a hold on each query until its initial results have arrived (unless the partialData option is passed).
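One possible shape for this, sketched with hypothetical action types and state layout (not the actual Redux integration):

// Sketch: per-query loading flags in Redux, so broadcasts can be held back
// until a query's own first result arrives. Names are illustrative.
interface QueriesState {
  [queryId: string]: { loading: boolean };
}

type QueryAction =
  | { type: 'QUERY_INIT'; queryId: string }
  | { type: 'QUERY_RESULT'; queryId: string }
  | { type: 'QUERY_ERROR'; queryId: string };

function queriesReducer(state: QueriesState = {}, action: QueryAction): QueriesState {
  switch (action.type) {
    case 'QUERY_INIT':
      return { ...state, [action.queryId]: { loading: true } };
    case 'QUERY_RESULT':
    case 'QUERY_ERROR':
      return { ...state, [action.queryId]: { loading: false } };
    default:
      return state;
  }
}

// A data broadcast for a given queryId would be skipped while
// state[queryId].loading is true (unless partialData was requested).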

Websocket network interface

Do we want to ship this as part of core or as a separate module?

A sample use case is in a Meteor app where extra variables / authentication happens via a Meteor method on the server.

@johnthepink and I will be building this interface next week and would love to know if we want it in core or as import { createWSNetworkInterface } from 'apollo-client-websocket' or the meteor sugared import { DDPNetworkInterface } from 'apollo-client-ddp'

Set up static typing

I think the complexity of this code is reaching the point where having static type error detection would really help with development speed and documentation. There are a couple of concepts - selection sets, query ASTs, cache objects, etc - that are being passed around constantly and it can be hard to figure out errors where those objects aren't properly formatted. If there's a solution to this problem that doesn't involve static typing, I'm all ears.

As far as I can tell, there are two options for static typing in JavaScript:

  1. TypeScript: http://www.typescriptlang.org/
  2. Flow type: http://flowtype.org/

TypeScript seems to be an all-in-one solution that also covers transpilation of new JS features in addition to type annotations. It also has integration into IDEs like Visual Studio Code, which might be nice to use to avoid having to set up a bunch of stuff.

On the other hand, Flow is used at Facebook, so it might be possible to take advantage of existing definitions in the GraphQL and Relay codebases.

On the third hand, TypeScript is starting to become a thing in the Angular/Angular 2 community.

What about contributors?

I think the core of this project will be pretty hard to contribute to anyway. I think the best thing we can do is have very clean APIs and as minimal a core as possible so that people can easily build extensions. IMO it would be OK if a contributor to the core caching logic has to understand static typing, as long as we keep this core as small as possible (I think this repo won't include router integrations, view integrations, etc - just the caching, diffing, fetching, and aggregation of queries).

Really interested to get feedback about this from some people - @jbaxleyiii, @arunoda, @Urigo, @benjamn?

Handle network errors

Right now, we handle GraphQLErrors just fine, but we don't have anything for network and runtime errors.

Garbage collection

In the current code, the store will grow without bound as more and more queries are fetched. We should implement a strategy for clearing out data we are no longer using.

This can be as simple as re-running all of the active queries and putting the results into a new empty store, then throwing away the old one.
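A sketch of that simple strategy, with hypothetical read/write helpers passed in (the real ones live in the store module):

// Sketch of "copy-and-swap" garbage collection: rebuild the store from the
// results of the currently active queries and discard the old store.
type NormalizedStore = Record<string, Record<string, unknown>>;

interface ActiveQuery {
  query: string;
  variables?: Record<string, unknown>;
}

function collectGarbage(
  activeQueries: ActiveQuery[],
  readQueryFromStore: (q: ActiveQuery) => unknown,
  writeQueryToStore: (store: NormalizedStore, q: ActiveQuery, result: unknown) => void
): NormalizedStore {
  const freshStore: NormalizedStore = {};
  for (const query of activeQueries) {
    // Re-run each active query against the old store and write the result
    // into the new one; anything not reachable from an active query is dropped.
    const result = readQueryFromStore(query);
    writeQueryToStore(freshStore, query, result);
  }
  return freshStore; // the old store can now be thrown away
}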

Add __typename fields to queries automatically

This can be done in a lot of different places. I think actually it might make sense to do it close to where the query is created, for example in watchQuery in QueryManager.

I mentioned earlier doing it in the network interface, but it kind of makes sense for that thing to just handle query fetching. It's a natural place to do it in Relay, but that's mostly because the network layer is basically the only documented integration point in Relay.

This should be configurable - if you have a Relay spec-compliant server, it will complain when you ask for id on an object that doesn't have that field. But if you have a server that returns null on non-existent IDs, we can just put it in everywhere. Luckily, this is not a problem with __typename because it's guaranteed to exist everywhere by the GraphQL spec itself.
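A sketch of one way to do the injection near query creation, using graphql-js's visit to append __typename to every selection set; this assumes a recent graphql-js and is not necessarily how apollo-client will implement it:

// Sketch: add __typename to every selection set via a query transform.
import { parse, print, visit, FieldNode, SelectionSetNode } from 'graphql';

const TYPENAME_FIELD: FieldNode = {
  kind: 'Field',
  name: { kind: 'Name', value: '__typename' },
};

function addTypenameToDocument(queryString: string): string {
  const ast = visit(parse(queryString), {
    SelectionSet: {
      leave(node: SelectionSetNode): SelectionSetNode | undefined {
        const alreadyThere = node.selections.some(
          (sel) => sel.kind === 'Field' && sel.name.value === '__typename'
        );
        if (alreadyThere) {
          return undefined; // no change needed
        }
        return { ...node, selections: [...node.selections, TYPENAME_FIELD] };
      },
    },
  });
  return print(ast);
}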

Update to include typescript 1.8 features

I'll do the update and list out the new features that are meaningful here. 👍

Note
We have been using 1.8, but the TypeScript team just updated their site and has a good breakdown of 1.8 features. A few of them will be helpful here, so I'll document them as tooling enhancements and open a corresponding PR with the adjustments.

Resolve node queries directly from store

When we have a query like:

{
  node(id: "1") {
    __typename,
    id,
    ... on Person {
      name
    }
  }
}

If we already have a node with ID 1 in the store, we should be able to retrieve it. Right now, the client will not realize that this query is just looking for a particular node, and it will hit the network with a new root query. This also means that when we receive the result of a root node query, we might want to cache it not under the root query, but only under the ID of the node.
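A rough sketch of the store check described above, assuming a normalized store keyed by node id (the helper and its inputs are illustrative):

// Sketch: before hitting the network for `node(id: ...)`, see if the store
// already has that node and can satisfy the requested fields.
type StoreObject = Record<string, unknown>;
type NormalizedStore = Record<string, StoreObject>;

function tryResolveNodeFromStore(
  store: NormalizedStore,
  nodeId: string,
  requestedFields: string[]
): StoreObject | null {
  const node = store[nodeId];
  if (!node) {
    return null; // not cached; fall back to a network root query
  }
  const result: StoreObject = {};
  for (const field of requestedFields) {
    if (!(field in node)) {
      return null; // missing a field; a (possibly partial) fetch is needed
    }
    result[field] = node[field];
  }
  return result;
}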

Add source maps to test coverage

We can remap the .lcov file before sending it to code climate using some tools that can be found with Google. Not super high priority though since TypeScript compiled code is pretty easy to read.

Pagination idea: type-specific refetching plugins

In some sense, handling pagination in a smart way is just a special case of "when you ask for data, treat certain fields in a special way because we know that the arguments are meaningful." So if we're looking at a Relay spec paginated list:

// first fetch
{
  user {
    id
    name
    friends(first: 10) {
      edges {
        cursor
        node {
          id
          name
        }
      }
      pageInfo {
        hasNextPage
      }
    }
  }
}

// new query
... same stuff
friends(first: 20)

// second fetch, doesn't fetch all 20 but uses the existing information
... same stuff
friends(first: 10, after: "lastCursor")

Notice how easy it is for us as humans to imagine what data needs to be fetched to satisfy the new query.

Here's a set of hypotheses:

  1. The initial fetch doesn't need any up-front information about types or pagination - it can look at the query result to know what to do, as long as we inject __typename fields where necessary.
  2. The re-fetch can determine the new query by the information in the store, which now has type annotations and the arguments from the new query
  3. The transformation from (2) can be written as a pure function that can be injected into the apollo client and associated with certain type names

Basically, you could write a function and tell the apollo client:

"When refetching any field which refers to an object with a type matching this regular expression, give me the query you were going to fetch, and the contents of the store, and I'll give you a new query that says how to fetch the missing data."

So, for Relay pagination, you'd say:

client.registerPaginationPlugin(/.+Connection/, ({ state, id, selectionSet }) => {
  ... do work ...

  return {
    // as much of the result as can be found
    result,

    // if empty, then the cache is sufficient and result contains the data.
    // otherwise, an array of queries that need to be fetched
    missingSelectionSets, 
  }
});

Ideally this will allow plugging in to different pagination systems, as long as the paginated fields have predictable type names, for example *PaginatedList or Paginated*.

If we can do this, it will achieve some really nice effects:

  1. You don't necessarily need to use the Relay pagination spec, which can be hard to translate to some REST APIs
  2. The store can avoid being concerned with pagination if the API above is indeed sufficient

This is just a very ambitious idea, and I'm eager to determine if this very simple model can actually handle all cases of pagination and Relay connections in particular. More analysis to come soon.

@jbaxleyiii @helfer curious what you think about this.

Note on how to make a GraphQL server always accept `id` in a query

From @helfer:

"actually, I’ll just write it down here: it’s in execute.js, all the way at the end in getFieldDef. I think we should just return a default id field if it doesn’t exist on the type. That’s also where the __typename stuff gets handled."

Just something to note when we add the functionality to add id and typename to all queries.

Turn on query diffing

We have the capability to diff a new query against the store to avoid fetching data we already have. Currently, that functionality is not exposed through index.js at all. There are some tests, but any new query passed to watchQuery is run against the server in its entirety. We should expose this functionality.

Options

There are 3 variants (with small deviations) of what you might want to do when fetching a new query:

  1. Do not diff against the store, always fetch the entire new result (what we do now)
  2. Use the cache if the whole result is there, but if only part of it is available, send the whole new query
    1. Only call the onResult callback when the server response returns
    2. Call onResult immediately with partial data, then again with complete data
  3. If the server follows the Relay object ID spec, send a minimal set of queries to the server to fetch the data we don't have yet
    1. Same as 2.i
    2. Same as 2.ii

This means there are several bits of data required to pick a fetching strategy for a particular query:

  1. Knowledge about what the server supports (for example, can we refetch a single node by ID?)
  2. Do we definitely want up-to-date data for this query (should we use the cache at all)
  3. Is partial data preferable to no data at all (should we return partial data from the cache immediately)

(1) should be determined when instantiating the client, and (2) and (3) should be options to watchQuery.

Defaults

I think the default should be the least surprising set of options, which makes no assumptions about the environment:

  1. Assume the server doesn't support Relay
  2. Assume every query should fetch from the server
  3. Assume partial data is not OK

There's a small cost that uninformed users will assume the client doesn't have these features, but it's up to us to make them easily discoverable.
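To make those defaults concrete, here is a purely hypothetical watchQuery call; supportsNodeRefetch, useCache, and returnPartialData are placeholder names for decisions (1)-(3) above, not a settled API.

// Hypothetical option names mapping to the decisions above; nothing here is
// the final API. The query is borrowed from the multi-endpoint example earlier.
import ApolloClient, { createNetworkInterface } from 'apollo-client';

const client = new ApolloClient({
  networkInterface: createNetworkInterface('https://example.com/graphql'),
  supportsNodeRefetch: false, // (1) assume the server doesn't support Relay-style node refetching
});

const handle = client.watchQuery({
  query: `
    query getCategory($categoryId: Int!) {
      category(id: $categoryId) {
        name
        color
      }
    }
  `,
  variables: { categoryId: 1 },
  useCache: false,          // (2) default: always fetch from the server
  returnPartialData: false, // (3) default: partial data is not OK
});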

Get a community started

Once we have some users for this thing, we would need a place to chat and see what people are doing with it.

Collection helpers

One of the things I have missed most when using Redux as my client-side data store has been the lack of query / sort / filter / etc. operations that we had with Mongo.Collection.

Since part of the GQL spec will require id, could we replicate the Mongo collection methods for easy manipulation / fetching of data from the store to use in the UI?
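Just to sketch the idea (not a proposed API): a couple of Mongo.Collection-style helpers over a normalized, id-keyed store might look like this.

// Sketch: Mongo-ish helpers over a normalized store keyed by id.
type StoreObject = Record<string, unknown> & { id: string };
type NormalizedStore = Record<string, StoreObject>;

function find(
  store: NormalizedStore,
  predicate: (obj: StoreObject) => boolean
): StoreObject[] {
  return Object.values(store).filter(predicate);
}

function sortBy(objects: StoreObject[], field: string): StoreObject[] {
  return [...objects].sort((a, b) =>
    String(a[field]).localeCompare(String(b[field]))
  );
}

// Usage: all authors in the store, sorted by name.
// const authors = sortBy(find(store, (o) => o.__typename === 'Author'), 'name');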

Compile list of changes before alpha release

@jbaxleyiii alluded to this in a comment earlier. If we want people to start testing this on basic queries and stuff, we should put out an alpha release where there is a clearly documented set of supported features. We should start writing down which features that should include, and see if that represents a coherent story. (For example, a drop-in replacement for Lokka presents a pretty good story, especially if there are a few crucial features in addition)

Write simple arrays to store

Currently there is an issue with writing simple arrays to the store. Arrays of objects work fine ([{}, {},]), but arrays of strings do not (['one', 'two',]). It throws this error:

TypeError: Cannot read property 'selections' of null

I played around in the tests, and it seems there is no issue reading an array of strings from the store once it is in there.

This seems like something it should support, and I'm going to work on a solution. Let me know if it's not. Thanks!
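For reference, the fix probably lives wherever list items are written to the store: if the field has no selection set, the items are scalars and should be stored as-is. A rough illustration (not the actual writeToStore code):

// Sketch: handle arrays of scalars (['one', 'two']) as well as arrays of objects.
interface SelectionSetLike {
  selections: unknown[];
}

function writeListToStore(
  items: unknown[],
  selectionSet: SelectionSetLike | null,
  writeObject: (item: object, index: number) => string // returns a data id
): unknown[] {
  return items.map((item, index) => {
    if (selectionSet === null || item === null || typeof item !== 'object') {
      // Scalar (or null) list item: store the value directly instead of
      // trying to read `.selections` of a null selection set.
      return item;
    }
    return writeObject(item as object, index);
  });
}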

Pluggable authentication in the network interface

After you log in to the server, you should be able to tell the client to send along a login token with every request.

This could also work as a feature to add a pluggable middleware to the network interface or client.
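A sketch of what such a middleware could look like for a fetch-based network interface; the wiring and names below are hypothetical:

// Sketch: middleware that attaches an auth token to every outgoing request.
interface RequestOptions {
  headers: Record<string, string>;
  body: string;
}

type Middleware = (options: RequestOptions) => RequestOptions;

function authMiddleware(getToken: () => string | null): Middleware {
  return (options) => {
    const token = getToken();
    if (!token) {
      return options;
    }
    return {
      ...options,
      headers: { ...options.headers, Authorization: `Bearer ${token}` },
    };
  };
}

// A network interface would run each request's options through the registered
// middlewares before calling fetch:
async function send(uri: string, options: RequestOptions, middlewares: Middleware[]) {
  const finalOptions = middlewares.reduce((opts, mw) => mw(opts), options);
  return fetch(uri, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', ...finalOptions.headers },
    body: finalOptions.body,
  });
}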
