absinthe-graphql / absinthe
The GraphQL toolkit for Elixir
Home Page: http://absinthe-graphql.org
License: Other
I have a RethinkDB table with a document that has the key verificationCode. Since this data already exists, it's not easy to migrate it to verification_code.
Is there a way to change the field to take a name? I thought I read somewhere that it's possible, but I can't seem to find it in the docs.
My current type looks like this (removed other fields for clarity):
defmodule Badger.Types.User do
  use Absinthe.Schema.Notation

  @desc "A registered user"
  object :user do
    field :verification_code, :string
  end
end
I've tried using camel case in the type module, but I get a "Field 'verificationCode': Not present in schema" error.
Is there any way to work around this?
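One workaround (a sketch only, using the resolver shape shown elsewhere in these issues) is to keep the snake_case identifier in the schema and use a custom resolver that reads the legacy camelCase key directly:

```elixir
field :verification_code, :string,
  resolve: fn _args, execution ->
    # Read the legacy camelCase key stored in RethinkDB, instead of
    # relying on the default snake_case lookup.
    {:ok, Map.get(execution.resolution.target, "verificationCode")}
  end
```

This leaves the external GraphQL name and the internal identifier consistent while accommodating the existing stored data.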
Noticed the following in my console today. Note organization_id vs organizationId (the latter would be expected) in the field error:
{message: "Field `dashboardResults': 1 required argument (`organization_id') not provided",…}
{message: "Argument `organizationId' (ID): Not provided", locations: [{line: 1, column: 0}]}
Ensure data is not set, per the specification.
Looking at the need to introspect enumValues in the spec, I see deprecation support included, something that I skipped when building out Type.Enum initially.
I'm thinking this, plus the fact that we're having to provide a mapping of external/internal values when creating an enum, probably points at a need for the type definition to be rethought a little, possibly with the inclusion of a fields/args-like function to more easily define values (and, in conjunction with deprecate, support deprecating them).
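For illustration, a values-definition macro might look something like this (hypothetical notation, by analogy with fields/args; the value/deprecate names are assumptions, not existing API):

```elixir
enum :color_channel do
  @desc "The color red"
  value :red, as: "r"
  @desc "The color blue"
  value :blue, as: "b"
  # Per-value deprecation, matching the spec's enumValues introspection
  value :puce, as: "p", deprecate: "it's unfashionable"
end
```

The as: option would carry the external/internal mapping that currently has to be supplied separately.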
The argument building code is inarguably the worst in the project. Now that we have it, I think we'd be able to clean it up quite a bit by using Absinthe.Traversal.reduce/4 to walk the schema and extract/coerce the argument values.
Copy pasting description from Sangria ( http://sangria-graphql.org/learn/#middleware )
Support generic middleware that can be used for different purposes, like performance measurement, metrics collection, security enforcement, etc., on a field and query level. Moreover, it makes it much easier for people to share standard middleware in libraries. Middleware allows you to define callbacks before/after a query and field.
It would be a great addition to Absinthe.
As in Sangria, middleware is especially useful when you can attach arbitrary meta to your fields:
In order to ensure generic classification of fields, every field contains a generic list of FieldTags which provides user-defined meta-information about this field (just to highlight a few examples: Permission("ViewOrders"), Authorized, Measured, Cached, etc.)
It's been started in test/specification; the boilerplate needs to be completed, filled in, and the :specification tag no longer skipped.
This is largely already done, but we need to verify.
Per specification.
Suppose:
field :thing, :thing do
  arg :id, :id, default_value: "1"
end
query Foo($id: ID) {
  thing(id: $id)
}
If you don't pass in an id variable, the default is not selected.
While they do get the behaviour warning during compilation, it would also be nice to do better than getting a CompileError with undefined function query/0.
I was using the Absinthe.Utils.camelize function and thought it would be nice to have an option to return an atom instead of a binary. Is this something that you would be interested in? If so, I would be happy to PR tomorrow.
Example:
# pass in as opt
Absinthe.Utils.camelize("foo", lower: true, atom: true)
>>> :foo
# instead of piping after
Absinthe.Utils.camelize("foo", lower: true)
|> String.to_existing_atom
>>> :foo
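A minimal sketch of how the option could be implemented, assuming it wraps the existing two-argument camelize (the wrapper shape is illustrative, not the actual patch):

```elixir
# Sketch: pop the :atom option, camelize as before, and convert with
# String.to_existing_atom/1 (avoiding unbounded atom creation).
def camelize(word, opts \\ []) do
  {atom?, opts} = Keyword.pop(opts, :atom, false)
  camelized = Absinthe.Utils.camelize(word, opts)

  if atom? do
    String.to_existing_atom(camelized)
  else
    camelized
  end
end
```

Using to_existing_atom rather than to_atom keeps user input from growing the atom table, at the cost of raising if the atom hasn't been defined elsewhere.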
Walk through the modules and set @moduledoc appropriately.
It would make sense to include documentation for the values the @absinthe module attribute can take in the http://hexdocs.pm/absinthe/Absinthe.Type.Definitions.html module, because it's the one you use when making types.
Only sketched out so far
Sometimes it can be helpful to perform some analysis on a query before executing it. An example is complexity analysis: it aggregates the complexity of all fields in the query and then rejects the query without executing it if the complexity is too high. Another example is gathering all Permission field tags and then fetching extra user auth data from an external service if the query contains protected fields. This needs to be done before the query starts to execute.
Sangria reference: http://sangria-graphql.org/learn/#query-reducers
This may already be known and may be related to #62 and #65.
Using a reserved word as a variable name causes the parser to error, for example:
Absinthe.parse("""
mutation CreateThing($input: Int!) {
  createThing(input: $input) { clientThingId }
}
""")
# {:error,
# %{message: "An unknown error occurred during parsing: no function clause matching in :absinthe_parser.extract_binary/1"}}
If you modify this test case in test/lib/absinthe/parser_test.exs to look like this, it also exposes the issue:
it "can parse queries with arguments and variables that are 'reserved words'" do
  @reserved
  |> Enum.each(fn arg ->
    assert {:ok, _} = Absinthe.parse("""
    mutation CreateThing($#{arg}: Int!) {
      createThing(#{arg}: $#{arg}) { clientThingId }
    }
    """)
  end)
end
@benwilson512 I'd appreciate it if you'd look through the documentation/code and look for any show stoppers that need to be fixed before our soft v0.1.0 release, and add them to the v0.1.0 milestone.
To support loading types from other modules, you can pass a :type_modules option to use Absinthe.Schema, eg:
defmodule App.Schema do
  use Absinthe.Schema, type_modules: [App.Schema.ObjectTypes, App.Schema.ScalarTypes]
  # ...
end
This needs to be documented.
Right now, subfields inside an InputObjectType are not checked against the schema, only top-level keys. Note that they ARE successfully removed from the parameters, it seems, but no error is collected for them.
Today I ran into a scenario where I wanted the default value for a field named before to be the current time, but any value we give to default_value is going to be hard-coded, as it's evaluated at compile time.
Since we already go through the effort of handling anonymous functions, I think we ought to allow a default value to also be a zero arity function that would get called at runtime.
Thoughts? @bruce
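A sketch of how the proposal might look at the call site (hypothetical; default_value accepting a zero-arity function is the suggestion here, not current behavior):

```elixir
field :things, list_of(:thing) do
  # Hypothetical: the function would be invoked at runtime for each
  # request, rather than the value being fixed at compile time.
  arg :before, :time, default_value: fn -> DateTime.utc_now() end
end
```

Literal values would keep working as today; only the function form would defer evaluation.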
Now that we're using Execution.Arguments to build argument maps for directives, we need to update the @spec types to reflect that things other than fields can be passed in.
I discussed this with @benwilson512 in Slack briefly, but having thought about it further, I don't think that the solution to n+1 queries suggested in this page will work well enough to be a generalised solution.
Essentially the solution proposed right now is to use the AST (passed to every resolve function) to look ahead and work out what relations will be followed, and use that information to preload those relationships (with Ecto).
The issue is that this is a very Ecto-centric approach to GraphQL: it won't work at all with a service-oriented architecture, and won't work with any kind of backend that doesn't support joins, or with backends that use several different storages. The main use case I have right now is integrating elasticsearch. I could have a Product type, with related categories, tags, users, etc.; I may retrieve the products and facets from elasticsearch, then want to retrieve their relations from my postgresql database.
Dataloader takes the approach of letting you define loaders that you call with individual IDs, which are aggregated and turned into a single request to whatever backend you may have. The results are then passed back to the relevant callers. I don't know enough about Elixir to know whether this approach is possible (the JS implementation relies on certain details about how JavaScript works which probably don't exist in Elixir), so an alternative might be needed (the idea that springs to mind is the ability to define a resolve_batch function against a field). But what's clear to me is that every language that implements a GraphQL server also needs a solution to this problem.
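To make the resolve_batch idea concrete, here is a rough sketch (an entirely hypothetical API; Repo and Category are illustrative Ecto names, not part of Absinthe):

```elixir
field :category, :category do
  # Hypothetical: instead of resolve/2 running once per parent item,
  # resolve_batch/2 would receive all accumulated category IDs for this
  # field and return a map of id => result, allowing a single query.
  resolve_batch fn category_ids, _execution ->
    results =
      Category
      |> where([c], c.id in ^category_ids)
      |> Repo.all()
      |> Map.new(fn c -> {c.id, c} end)

    {:ok, results}
  end
end
```

Because the batching happens per field rather than per storage engine, the same contract would work for Ecto, elasticsearch, or an external service.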
For a non-existent interface, we see, eg:
(RuntimeError) No type found in [:node]
(absinthe) lib/absinthe/schema/rule/object_interfaces_must_be_valid.ex:32: anonymous fn/4 in Absinthe.Schema.Rule.ObjectInterfacesMustBeValid.check_type/2
The GraphQL spec has been updated in April, the list of changes can be found here:
https://github.com/facebook/graphql/releases/tag/April2016
Hopefully this is not too painful...
Right now to get the "current object" in a resolver, you need to do something like this:
resolve: fn _args, execution ->
  # do something with execution.resolution.target
end
Since the object is really the current scope of the schema node in question, this happens often enough, and the internals of the Execution.t are best considered a "private API," I think it makes sense to pass it as the first argument to the resolver, eg:
resolve: fn obj, _args, _execution ->
  # do something with obj
end
(This would bring it generally in line with other implementations.)
Or, alternately, have some other contract we expect from resolvers.
What do you think, @benwilson512 ?
Issue originally found in #62, and occurs during value coercion when building the variables for the execution context.
** (MatchError) no match of right hand side value: %Absinthe.Type.InputObject{__reference__: %{identifier: :create_user_input, location: %{file: "/web/graph/schema/type/user.ex", line: 22}, module: Graph.Schema.Type.User}, description: nil, fields: %{email: %Absinthe.Type.Field{__reference__: %{identifier: :email, location: %{file: "/web/graph/schema/type/user.ex", line: 23}, module: Graph.Schema.Type.User}, args: %{}, default_value: nil, deprecation: nil, description: nil, name: "email", resolve: nil, type: %Absinthe.Type.NonNull{of_type: :string}}, name: %Absinthe.Type.Field{__reference__: %{identifier: :name, location: %{file: "/web/graph/schema/type/user.ex", line: 24}, module: Graph.Schema.Type.User}, args: %{}, default_value: nil, deprecation: nil, description: nil, name: "name", resolve: nil, type: %Absinthe.Type.NonNull{of_type: :string}}}, name: "CreateUserInput"}
(absinthe) lib/absinthe/execution/variables.ex:95: Absinthe.Execution.Variables.coerce/2
(absinthe) lib/absinthe/execution/variables.ex:66: Absinthe.Execution.Variables.valid/4
It looks like we need to handle this case specifically in Variables.build and Arguments.build.
Make the following changes:
Remove inconsistent "Type" suffixes
Simplify field types
Currently validation is only implemented in parts (focused on field, variable, and argument validation) and is conflated with execution.
Validation should be rebuilt as a separate (optional, per the spec) phase, using Absinthe.Traversal.reduce/4 if at all possible.
Currently we catch name/identifier collisions across modules, but not within modules, which can lead to frustration, especially for new schema developers who aren't used to GraphQL's constraints (and may be doing a lot of copying and pasting while building a schema).
Some cases to check for:
@absinthe :type (Elixir warns on the identical function, but we should do more)
@absinthe type: :same_custom_identifier
%Absinthe.Type.Object{name: same_name}
We should bake the post-processing that's done on a schema further into the schema module itself, so that calling .schema is not such an expensive process.
General ideas include:
For type maps
def __query_type_by_name__("User"), do: MyTypeModule.user
def __query_type_by_identifier__(:user), do: MyTypeModule.user
Using on_definition hooks to introspect on the AST of types / queries / mutations as they're created to do the transformations then, and inject them into other functions.
Things that won't work: unquoting the .schema value as it currently looks. Anonymous functions can't be escaped, so they can't be unquoted into anything.
query IntrospectionQuery {
  __schema {
    queryType { name }
    mutationType { name }
    types {
      ...FullType
    }
    directives {
      name
      description
      args {
        ...InputValue
      }
      onOperation
      onFragment
      onField
    }
  }
}
fragment FullType on __Type {
  kind
  name
  description
  fields {
    name
    description
    args {
      ...InputValue
    }
    type {
      ...TypeRef
    }
    isDeprecated
    deprecationReason
  }
  inputFields {
    ...InputValue
  }
  interfaces {
    ...TypeRef
  }
  enumValues {
    name
    description
    isDeprecated
    deprecationReason
  }
  possibleTypes {
    ...TypeRef
  }
}
fragment InputValue on __InputValue {
  name
  description
  type { ...TypeRef }
  defaultValue
}
fragment TypeRef on __Type {
  kind
  name
  ofType {
    kind
    name
    ofType {
      kind
      name
      ofType {
        kind
        name
      }
    }
  }
}
The default resolver does a Map.get/2 to retrieve a field value. In some cases (see #76) it may be desirable to modify the default resolver in some way, and in a single location.
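One possible shape for this (the option name and resolver signature are hypothetical, purely to illustrate a single configurable location):

```elixir
# Hypothetical schema-level option replacing the built-in Map.get/2
# default, e.g. to fall back to string keys when atom keys are absent.
use Absinthe.Schema,
  default_resolve: fn field_name ->
    fn _args, %{resolution: %{target: target}} ->
      {:ok, Map.get(target, field_name) || Map.get(target, to_string(field_name))}
    end
  end
```

Centralizing it this way would let cases like #76 be handled once, rather than with a custom resolve on every field.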
Ensure the coercion parts of https://facebook.github.io/graphql/#sec-Lists are implemented.
When I try to parse a Relay mutation, I get the following exception:
** (FunctionClauseError) no function clause matching in :absinthe_parser.extract_binary/1
    (absinthe) src/absinthe_parser.yrl:210: :absinthe_parser.extract_binary("input")
    (absinthe) src/absinthe_parser.yrl:105: :absinthe_parser.yeccpars2_72/7
    (absinthe) /opt/boxen/homebrew/Cellar/erlang/18.2.1/lib/erlang/lib/parsetools-2.1.1/include/yeccpre.hrl:57: :absinthe_parser.yeccpars0/5
    (absinthe) lib/absinthe.ex:177: Absinthe.parse/1
Here's my schema:
object :user do
  field :id, :id
  field :email, :string
  field :name, :string
end

input_object :create_user_input do
  field :email, non_null(:string)
  field :name, non_null(:string)
end

mutation do
  @desc "Create a user"
  field :create_user, type: :user do
    arg :input, non_null(:create_user_input)
    resolve &Resolver.User.create/2
  end
end
Relay submits a mutation like:
mutation CreateUserMutation($input_0:createUserInput!){createUser(input:$input_0){clientMutationId}}
:root_value key in Absinthe.Execution.t
Given the nature of GraphQL, a client can easily send arbitrarily deep queries, which may have a high impact on server performance. Absinthe should help guard against this by analyzing query complexity before executing it, and should reject a query if it goes beyond a given, customizable threshold.
This is the ideal use case / application for #107
Being able to limit query depth would also probably be valuable as well.
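A sketch of what per-field complexity analysis could look like (the complexity macro here is hypothetical, modeled loosely on Sangria's approach):

```elixir
# Hypothetical: each field declares a complexity function; before
# execution, an analysis phase sums complexities over the document and
# rejects it if the total exceeds a configured maximum.
field :posts, list_of(:post) do
  arg :limit, :integer, default_value: 10
  complexity fn %{limit: limit}, child_complexity ->
    # The cost of this field grows with the requested page size
    # multiplied by the cost of the selected child fields.
    limit * child_complexity
  end
end
```

A depth limit could be implemented in the same pre-execution phase by tracking nesting while walking the document, which is also where #107's query reducers would fit.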
Given that field names are in general pretty public, how would you feel about using the built-in string distance functions to include a "did you mean X?" in field-not-present error messages?
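For illustration, Elixir's standard library already ships a suitable distance function, String.jaro_distance/2, which could drive the suggestion (a sketch; the module name and 0.75 threshold are arbitrary choices):

```elixir
defmodule Suggest do
  # Suggest the known field whose name is closest to the unknown one,
  # using the Jaro distance built into Elixir's String module.
  def did_you_mean(unknown, known_fields, threshold \\ 0.75) do
    {best, score} =
      known_fields
      |> Enum.map(fn name -> {name, String.jaro_distance(unknown, name)} end)
      |> Enum.max_by(fn {_name, score} -> score end)

    if score >= threshold, do: best, else: nil
  end
end

Suggest.did_you_mean("verificationCode", ["verification_code", "email"])
# => "verification_code"
```

Returning nil below the threshold avoids suggesting wildly unrelated fields in the error message.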
Right now when you go from the home page to another page, the main nav location jumps. Should probably either:
In the past, we returned results from Absinthe.run with empty errors and data entries, which didn't match the specification. We've fixed that, but it looks like there are a few stragglers in the documentation, namely http://hexdocs.pm/absinthe/Absinthe.Adapter.LanguageConventions.html
We need to ensure we're preventing fragment application cycles during validation/execution.
https://facebook.github.io/graphql/#sec-Fragment-spreads-must-not-form-cycles
Changes would go in the selection set resolution code.
To make defining resolve functions more organized and add support for rescuing and reporting common errors, we've talked about having a Resolver behaviour.
Here's the sketch @benwilson512 and I came up with:
defmodule MyResolver do
  @behaviour Absinthe.Resolver

  # Specs on how it might work:
  @typep resolution_function :: (map, Absinthe.Execution.t -> {:ok, any} | {:error, any})
  @typep mod_fun :: {atom, atom}
  @spec resolver(any) :: {:ok, mod_fun} | {:ok, resolution_function} | {:error, binary}
  @spec transform_error({:error, any}) :: {:error, [binary]}
  # Useful for, eg, transform_error({:error, %Ecto.Changeset{} = changeset})
end
Then, eg:
%Absinthe.Type.ObjectType{
  fields: fields(
    item: [
      type: :item,
      args: args(id: [type: :id]),
      resolve: MyResolver.resolve(:item)
    ]
  )
}
Make sure the following rules from the spec are checked during validation: