
SBT plugin to generate and validate graphql schemas written with Sangria

License: Apache License 2.0


sbt-graphql's Introduction

sbt-graphql

This plugin is sbt 1.x only and experimental.

sbt plugin to generate and validate graphql schemas written with Sangria.

Goals

This plugin is intended for testing pipelines that ensure that your graphql schema and queries are intact and match. You should also be able to compare it with another schema, e.g. the production schema, to avoid breaking changes.

Features

All features are based on the excellent Sangria GraphQL library.

Examples for client-side code generation and for schema validation can be found in the test-project directory.

Note: Generating server-side code from a schema is currently not supported. Look here for sangria-based solutions.

Usage

Add this to your plugins.sbt and replace the <version> placeholder with the latest release.

addSbtPlugin("de.mukis" % "sbt-graphql" % "<version>")

In your build.sbt enable the plugins and add sangria. I'm using circe as a parser for my JSON response.

enablePlugins(GraphQLSchemaPlugin, GraphQLQueryPlugin)

libraryDependencies ++= Seq(
  "org.sangria-graphql" %% "sangria" % "1.4.2"
)

Schema generation

The schema is generated by accessing the application code via a generated main class that renders your schema. The main class accesses your code via a small code snippet defined in graphqlSchemaSnippet.

Example: My schema is defined in an object called ProductSchema in a field named schema. In your build.sbt add

graphqlSchemaSnippet := "example.ProductSchema.schema"
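For reference, here is a minimal sketch of what such an object might look like, using sangria's schema DSL. The ProductSchema object and its fields are hypothetical and only illustrate the shape the snippet setting points at:

```scala
package example

import sangria.schema._

// Hypothetical application schema; graphqlSchemaSnippet above
// references the `schema` field of this object.
object ProductSchema {
  val ProductType: ObjectType[Unit, Unit] = ObjectType(
    "Product",
    fields[Unit, Unit](
      Field("name", StringType, resolve = _ => "example product")
    )
  )

  val QueryType: ObjectType[Unit, Unit] = ObjectType(
    "Query",
    fields[Unit, Unit](
      Field("product", ProductType, resolve = _ => ())
    )
  )

  val schema: Schema[Unit, Unit] = Schema(QueryType)
}
```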

Now you can generate a schema with

$ sbt graphqlSchemaGen

You can configure the output directory in your build.sbt with

target in graphqlSchemaGen := target.value / "graphql-build-schema"

Schema definitions

Your build can contain multiple schemas. They are stored in the graphqlSchemas setting. This allows you to compare arbitrary schemas, write a schema.json file for each of them, and validate your queries against them.

One schema is already predefined: the build schema, which is defined by the graphqlSchemaGen task. You can configure its graphqlSchemas label with

name in graphqlSchemaGen := "local-build"

Add a schema

Schemas are defined via a GraphQLSchema case class. You need to define

  • a label. It should be unique and human readable. prod and build already exist
  • a description. Explain where this schema comes from and what it represents
  • a schemaTask. A sbt task that generates the schema

You can also define a schema from a SchemaLoader. This requires defining an anonymous sbt task.

graphqlSchemas += GraphQLSchema(
  "sangria-example",
  "staging schema at http://try.sangria-graphql.org/graphql",
  Def.task(
    GraphQLSchemaLoader
      .fromIntrospection("http://try.sangria-graphql.org/graphql", streams.value.log)
      .loadSchema()
  ).taskValue
)

sbt-graphql provides a helper object GraphQLSchemaLoader to load schemas from different places.

// from a Json file
graphqlProductionSchema := GraphQLSchemaLoader
  .fromFile((resourceManaged in Compile).value / "prod.json")
  .loadSchema()

// from a GraphQL file
graphqlProductionSchema := GraphQLSchemaLoader
  .fromFile((resourceManaged in Compile).value / "prod.graphql")
  .loadSchema()

// from a graphql endpoint via introspection
graphqlProductionSchema := GraphQLSchemaLoader
  .fromIntrospection("http://prod.your-graphql.net/graphql", streams.value.log)
  .withHeaders("X-Api-Version" -> "1", "X-Api-Key" -> "4198ab84-e992-42b0-8742-225ed15a781e")
  .loadSchema()
  
// from a graphql endpoint via introspection with post request
graphqlProductionSchema := GraphQLSchemaLoader
  .fromIntrospection("http://prod.your-graphql.net/graphql", streams.value.log)
  .withPost()
  .loadSchema()

Schema comparison

Sangria provides an API for comparing two Schemas. A change can be breaking or not. The graphqlValidateSchema task compares two given schemas defined in the graphqlSchemas setting.

graphqlValidateSchema <new schema> <old schema>

Example

You can compare the build and prod schema with

$ sbt
> graphqlValidateSchema build prod

Schema rendering

You can render every schema with the graphqlRenderSchema task. In your sbt shell

> graphqlRenderSchema build

This will render the build schema.

You can configure the target directory with

target in graphqlRenderSchema := target.value / "graphql-schema"

Schema release notes

sbt-graphql creates release notes from changes between two schemas. The format is currently markdown.

$ sbt 
> graphqlReleaseNotes <new schema> <old schema>

Example

You can create release notes for the build and prod schema with

$ sbt
> graphqlReleaseNotes build prod

Code Generation

A graphql query result is usually modelled with case classes, enums and traits. Writing these query result classes is tedious and error-prone. sbt-graphql can generate the correct models for every graphql query.

A lot of inspiration came from apollo codegen. Make sure to check it out for scalajs, typescript and plain javascript projects.

Configuration

Enable the code generation plugin in your build.sbt

enablePlugins(GraphQLCodegenPlugin)

You need a graphql schema for the code generation. The schema is necessary to figure out the types for each query field. By default, the codegen plugin looks for a schema at src/main/resources/schema.graphql.
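If you already maintain a schema file in your repository, you can also point the codegen directly at it. The path below is illustrative, not a default:

```scala
// build.sbt — point the codegen at an existing schema file
// (the directory layout here is an assumption, adjust to your project)
graphqlCodegenSchema := (sourceDirectory in Compile).value / "graphql" / "schema.graphql"
```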

We recommend to configure a graphql schema in your graphqlSchemas and use the task to render the schema to a specific file.

// add a 'starwars' schema to the `graphqlSchemas` list
graphqlSchemas += GraphQLSchema(
  "starwars",
  "starwars schema at http://try.sangria-graphql.org/graphql",
  Def.task(
    GraphQLSchemaLoader
      .fromIntrospection("http://try.sangria-graphql.org/graphql", streams.value.log)
      .withHeaders("User-Agent" -> s"sbt-graphql/${version.value}")
      .loadSchema()
  ).taskValue
)

// use this schema for the code generation
graphqlCodegenSchema := graphqlRenderSchema.toTask("starwars").value

The graphqlCodegenSchema requires a File that points to a valid graphql schema file. graphqlRenderSchema is a task that renders any given schema in the graphqlSchemas into a schema file. It takes as input the unique label that identifies the schema. The toTask("starwars") invocation converts the graphqlRenderSchema input task with the input parameter starwars to a plain task that can be evaluated as usual with .value.

By default, all *.graphql files in your resourceDirectories will be used for code generation.

Settings

You can configure the output in various ways

  • graphqlCodegenStyle - Configure the code output style. Default is Apollo. You can choose between Sangria and Apollo
  • graphqlCodegenSchema - The graphql schema file used for code generation
  • sourceDirectories in graphqlCodegen - List of directories where graphql files should be looked up. Default is sourceDirectory in graphqlCodegen, which defaults to sourceDirectory in Compile / "graphql"
  • includeFilter in graphqlCodegen - Filter graphql files. Default is "*.graphql"
  • excludeFilter in graphqlCodegen - Filter graphql files. Default is HiddenFileFilter || "*.fragment.graphql"
  • graphqlCodegenQueries - Contains all graphql query files. By default this setting contains all files that reside in sourceDirectories in graphqlCodegen and that match the includeFilter / excludeFilter settings.
  • graphqlCodegenPackage - The package where all generated code is placed. Default is graphql.codegen
  • name in graphqlCodegen - Used as a module name in the Sangria code generator.
  • graphqlCodegenJson - Generate JSON encoders/decoders with your graphql query. Default is JsonCodec.None. Note that not all styles support JSON encoder/decoder generation.
  • graphqlCodegenImports: Seq[String] - A list of additional imports that are included in every generated file
  • graphqlCodegenPreProcessors: Seq[PreProcessor] - A list of preprocessors that can alter the original graphql query before it is parsed. By default the magic #imports for including fragments are enabled. See the magic #imports section for more details.
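Putting several of these together, a build.sbt fragment might look like this. The package name is illustrative, not a default; the other values restate documented defaults or options:

```scala
// build.sbt — illustrative combination of codegen settings
enablePlugins(GraphQLCodegenPlugin)

graphqlCodegenStyle := Apollo
graphqlCodegenPackage := "com.example.graphql.client" // hypothetical package
graphqlCodegenJson := JsonCodec.Circe
graphqlCodegenImports += "java.time._"
excludeFilter in graphqlCodegen := HiddenFileFilter || "*.fragment.graphql"
```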

JSON support

The common serialization format for graphql results and input variables is JSON. sbt-graphql supports JSON decoder/encoder code generation.

Supported JSON libraries and codegen styles

In your build.sbt you can configure the JSON library with

graphqlCodegenJson := JsonCodec.Circe
// or
graphqlCodegenJson := JsonCodec.PlayJson

Scalar types

The code generation doesn't know about your additional scalar types. sbt-graphql provides a setting graphqlCodegenImports to add an import to every generated query object.

Example:

scalar ZoneDateTime

which is represented as java.time.ZoneDateTime. Add this as an import

graphqlCodegenImports += "java.time.ZoneDateTime"

Code Gen directive

The plugin provides a codeGen directive that is erased during source code generation and can be used to customize the generated code.

Example - skip code generation

To skip code generation and provide your own type:

query CodeGenHeroNameQuery {
  hero @codeGen(useType: "Hero") {
    name
  }
}

You can also use the fully qualified class name to avoid clashes

query CodeGenHeroNameQuery {
  hero @codeGen(useType: "com.example.model.Hero") {
    name
  }
}

Difference to scalar types

Use the code gen directive when the code generation doesn't produce the code you need. This can happen for several reasons:

  • unsupported feature
  • bug in the code generation
  • code requires additional changes, e.g. extending traits or adding helper methods

The code gen directive is intended to work on object types.

Scalar types represent the core building blocks of your graphql schema like String, Float, Boolean, etc. (DateTime is unfortunately still missing). For this reason there's no code generation for those types, which makes the code gen directive unnecessary there.

Magic #imports

This feature tries to replicate the apollographql/graphql-tag loader.js feature, which enables including (actually inlining) partials into a graphql query with magic comments.

Explained

The syntax is straightforward

#import path/to/included.fragment.graphql

The fragment files should be named like this

<name>.fragment.graphql

There is an excludeFilter in graphqlCodegen, which removes them from code generation, so they are only used for inlining and interface generation.

Path resolution works like this

  • The path is resolved by checking all sourceDirectories in graphqlCodegen for the given path
  • No relative paths like ./foo.fragment.graphql are supported
  • Imports are resolved recursively. This means you can #import fragments in a fragment.

Example

I have a file CharacterInfo.fragment.graphql which contains only a single fragment

fragment CharacterInfo on Character {
    name
}

And the actual graphql query file

query HeroFragmentQuery {
  hero {
    ...CharacterInfo
  }
  human(id: "Lea") {
    homePlanet
    ...CharacterInfo
  }
}

#import fragments/CharacterInfo.fragment.graphql

Codegen style Apollo

As the name suggests the output is similar to the one in apollo codegen.

A basic GraphQLQuery trait is generated, which all queries extend.

trait GraphQLQuery {
  type Document
  type Variables
  type Data
}

The Document contains the query document parsed with sangria. The Variables type represents the input variables for the particular query. The Data type represents the shape of the query result.
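To make this shape concrete, here is a dependency-free sketch that mirrors the generated code. The real output uses sangria.ast.Document, which is replaced by a String stand-in here, so this is an illustration and not the actual generated source:

```scala
// Stand-in for the generated code: sangria.ast.Document is replaced
// by String so this sketch runs without sangria on the classpath.
trait GraphQLQuery {
  type Document
  type Variables
  type Data
}

object HeroNameQuery extends GraphQLQuery {
  type Document = String // the real generated code uses sangria.ast.Document

  val document: Document = "query HeroNameQuery { hero { name } }"

  case class Variables()
  case class Data(hero: Hero)
  case class Hero(name: Option[String])
}

// The Data type gives a typed view of the query result:
val result = HeroNameQuery.Data(HeroNameQuery.Hero(Some("R2-D2")))
println(result.hero.name.getOrElse("unknown")) // prints R2-D2
```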

For each query a new object is created with the name of the query.

Example:

query HeroNameQuery {
  hero {
    name
  }
}

Generated code:

package graphql.codegen
import graphql.codegen.GraphQLQuery
import sangria.macros._
object HeroNameQuery {
  object HeroNameQuery extends GraphQLQuery {
    val document: sangria.ast.Document = graphql"""query HeroNameQuery {
  hero {
    name
  }
}"""
    case class Variables()
    case class Data(hero: Hero)
    case class Hero(name: Option[String])
  }
}

Interfaces, types and aliases

The ApolloSourceGenerator generates an additional file Interfaces.scala with the following shape:

object types {
   // contains all defined types like enums and aliases
}
// all used fragments and interfaces are generated as traits here
Use case

Share common business logic around a fragment that shouldn't be a directive

You can now do this by defining a fragment and including it in every query that requires this logic. sbt-graphql will generate a common trait, and all generated case classes will extend this fragment trait.

Limitations

You need to copy the fragments into every graphql query that should use it. If you have a lot of queries that reuse the fragment and you want to apply changes, this is cumbersome.

You cannot nest fragments. The code generation isn't capable of naming the nested data structure. This means that you need to create fragments for every nesting level.

Invalid

query HeroNestedFragmentQuery {
  hero {
    ...CharacterInfo
  }
  human(id: "Lea") {
    ...CharacterInfo
  }
}

# This will generate code that may compile, but is not usable
fragment CharacterInfo on Character {
    name
    friends {
        name
    }
}

Correct

query HeroNestedFragmentQuery {
  hero {
    ...CharacterInfo
  }
  human(id: "Lea") {
    ...CharacterInfo
  }
}

# create a fragment for the nested query
fragment CharacterFriends on Character {
    name
}

fragment CharacterInfo on Character {
    name
    friends {
        ...CharacterFriends
    }
}

Codegen Style Sangria

This style generates one object with a specified moduleName and puts everything in there.

Example:

query HeroNameQuery {
  hero {
    name
  }
}

Generated code:

object HeroNameQueryApi {
  case class HeroNameQuery(hero: HeroNameQueryApi.HeroNameQuery.Hero)
  object HeroNameQuery {
    case class HeroNameQueryVariables()
    case class Hero(name: Option[String])
  }
}

Query validation

The query validation uses the schema generated with graphqlSchemaGen to validate against all graphql queries defined under src/main/graphql. Using separated graphql files for queries is inspired by apollo codegen which generates typings for various languages.

To validate your graphql files run

sbt graphqlValidateQueries

You can change the source directory for your graphql queries with this line in your build.sbt

sourceDirectory in (Test, graphqlValidateQueries) := file("path/to/graphql")

Developing

Test project

You can try out your changes immediately with the test-project:

$ cd test-project
$ sbt

If you change code in the plugin you need to reload the test-project.

Releasing

Push a tag vX.Y.Z and Travis will automatically release it. If you push to the snapshot branch, a snapshot version (using the git sha) will be published.

The git.baseVersion := "x.y.z" setting configures the base version for snapshot releases.

sbt-graphql's People

Contributors

bc-dima-pasieka, benwaffle, ches, felixbr, fspillner, jonas, krever, muuki88, ngbinh, nickhudkins, nicolasrouquette, sullis, swsnr

sbt-graphql's Issues

Scripted tests

It would be nice to have scripted tests for the plugin. I'm not currently sure how to manually test things using the test-project.

Code generation for unions is flawed

I'll just document this here, sadly I don't have the time to fix it right now.

Version used: 0.16.4

Given the following query:

query Test {
    content {
      __typename

      ... on Foo {
        slug
      }

      ... on Bar {        
        id
        foo {          
          slug
        }
      }
    }
}    

The generated code looks like this (only the relevant part):

object Content {
  case class Foo(__typename: String, slug: String) extends Content
  object Foo {
    implicit val jsonDecoder: Decoder[Foo] = deriveDecoder[Foo]
    implicit val jsonEncoder: Encoder[Foo] = deriveEncoder[Foo]
  }

  case class Bar(__typename: String, id: Int, foo: Content.Foo) extends Content
  object Bar {
    case class Foo(__typename: String, slug: String)
    object Foo {
      implicit val jsonDecoder: Decoder[Foo] = deriveDecoder[Foo]
      implicit val jsonEncoder: Encoder[Foo] = deriveEncoder[Foo]
    }
    implicit val jsonDecoder: Decoder[Bar] = deriveDecoder[Bar]
    implicit val jsonEncoder: Encoder[Bar] = deriveEncoder[Bar]
  }

  implicit val jsonDecoder: Decoder[Content] = for {
    typeDiscriminator <- Decoder[String].prepare(_.downField("__typename"))
    value <- typeDiscriminator match {
      case "Foo" =>
        Decoder[Foo]
      case "Bar" =>
        Decoder[Bar]
      case other =>
        Decoder.failedWithMessage("invalid type: " + other)
    }
  } yield value
  implicit val jsonEncoder: Encoder[Content] = Encoder.instance[Content]({
    case v: Foo =>
      deriveEncoder[Foo].apply(v)
    case v: Bar =>
      deriveEncoder[Bar].apply(v)
  })
}

There are two bugs here:

  1. Foo and Bar have case class fields for __typename, which means that we have to request __typename in the query for both of them or the decoding will fail for no good reason. It would make more sense to generate constants, since we already know the value anyway and it never changes.
// Instead of
case class Foo(__typename: String, slug: String)

// we should generate
case class Foo(slug: String) {
  def __typename: String = "Foo"
}

// or maybe even
case class Foo(slug: String) {
  def __typename: String = Foo.__typename
}
object Foo {
  def __typename: String = "Foo"
}
  2. Inside of Bar we reference Foo for the second union case in the query. In the generated code we have a dedicated Bar.Foo, but for some reason we use Content.Foo in Bar and Bar.Foo is unused. This means the whole thing doesn't work if the two cases use different fields from Foo.
    I'm not sure if there is a reason for this (fragments?) or if it's just an honest mistake.

  3. Not really a bug, but strange: the implementation for Encoder[Content] derives encoders inline instead of using the existing ones. The equivalent Decoder doesn't do this. Imo we shouldn't derive something inline, especially if it already exists.

Code generation - The output directory for generated files should be configurable.

Hi,

Currently there is no way to configure the output directory for generated files. By default they are generated into the target directory.

As the generated files are source files, putting them in target doesn't seem right. Therefore, I think it would be very helpful if the plugin allowed setting the output folder, so they can be put somewhere under the src directory.

Please share your thoughts.

Thanks
Lakha

Question about code generation issue in test phase

Hi @muuki88. Thank you for the nice plugin!

I'm using a pretty default sbt-graphql configuration in a Play project. I ran into an issue with code generation, which happens only in the test phase, with an error similar to: Symbol 'type graphql.codegen.types.myInputType' is missing from the classpath.
I think this may be related to the different generated Interfaces.scala files in /main and in /test

/* ../web/target/scala-2.13/src_managed/main/sbt-graphql/Interfaces.scala */
package graphql.codegen
import io.circe.{ Decoder, Encoder }
import io.circe.generic.semiauto.{ deriveDecoder, deriveEncoder }
object types {
  case class myInputType(questionId: Option[String], userAnswers: List[String])
  case object myInputType {
    implicit val jsonDecoder: Decoder[myInputType] = deriveDecoder[myInputType]
    implicit val jsonEncoder: Encoder[myInputType] = deriveEncoder[myInputType]
  }
}

Classes are missing in the file generated in /test

/* ../web/target/scala-2.13/src_managed/test/sbt-graphql/Interfaces.scala*/
package graphql.codegen
import io.circe.{ Decoder, Encoder }
import io.circe.generic.semiauto.{ deriveDecoder, deriveEncoder }
object types

Any help would be greatly appreciated!

Schema generation from Introspection

Hi Guys,

I am trying to generate a schema file which is a combination of Local Schema and Results of an Introspection:

graphqlSchemaSnippet := "demo.StaticSchema.schema"

target in graphqlSchemaGen := target.value / "graphql"

The above config generates a schema file from my local schema. I also want to generate a schema file from my introspection result:

graphqlSchemas += GraphQLSchema(
  "Remote",
  "Remote Schema",
  Def.task(
    GraphQLSchemaLoader
      .fromIntrospection("http://localhost:8081/graphql", streams.value.log)
      .loadSchema()
  ).taskValue
)

But after running sbt graphqlSchemaGen I don't see any introspection schema. I guess I am missing some piece and can't figure it out from the documentation.

Any help would be appreciated. Thanks

Only regenerate Scala files from GraphQL schema if schema changes

In our codebase I observed that the generated files are often recompiled without explicitly cleaning them or changing the schema. This isn't a big deal for small schemas but in our case it's over 400 files that are being recompiled and takes several seconds.

I don't have a repro-case yet but it should be fairly straight forward. I can imagine using one of the test/example projects and checking how often the files are being regenerated (it should only happen if the schema changed or after an explicit clean command).

sbt already has facilities to avoid unnecessary work (e.g. used to avoid compiling files that haven't changed), so we should use that here as well imo.

References:
https://www.scala-sbt.org/1.x/docs/Caching.html
https://www.scala-sbt.org/1.x/docs/Cached-Resolution.html

Generate circe Encoder for query Variables

Hey,

currently only Decoder instances for the returned graphql result are generated. But the input variables of the query are usually sent via json to graphql as well.

It would be beneficial to add the Encoder derivation for the Variables case class.

I'm not intimately familiar with the graphql spec, so I'm not sure whether input variables can be nested, but if not, deriving the Encoder for the top-level Variables case class might be enough.
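As a sketch, assuming circe-generic is on the classpath, the requested derivation could look like this. The Variables shape below is hypothetical, standing in for a generated query's input class:

```scala
import io.circe.Encoder
import io.circe.generic.semiauto.deriveEncoder

// Hypothetical generated Variables class for a query taking an id
case class Variables(id: String)

object Variables {
  // What the plugin could emit next to the existing Decoder instances
  implicit val jsonEncoder: Encoder[Variables] = deriveEncoder[Variables]
}
```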

Add setting for additional imports in generated code

The generated code may require additional imports for

  • semiderived json codecs
  • custom scalar types #26

For this reason a user should be able to specify additional imports. An API could look like this.

In your build.sbt

graphqlCodegenImports += "com.example.time.codecs._"

The additional imports should be available in the codegen context.

Not available on maven?

My sbt build fails to find addSbtPlugin("rocks.muki" % "sbt-graphql" % "0.10.1") although it used to work before.

  • Did someone remove the rocks.muki groupId?
  • Is it just me having trouble getting it to show up in search too?
  • Or are plugins kept and perhaps resolved somewhere else? Not an SBT dev ;)

Additional imports are quoted and therefore broken

With the new 0.10.1 release I have a problem with "additional imports". The imports are still inserted into the generated sources, but now have back-ticks that aren't correct:

Additional imports:

graphqlCodegenImports ++= Seq(
  "java.time._",
  "io.circe.java8.time._",
),

Compiler complains:

[error] SomeGeneratedSourceFile.scala:2:8: not found: object java.time
[error] import `java.time`._
[error]        ^
[error] SomeGeneratedSourceFile.scala:3:8: not found: object io.circe.java8.time
[error] import `io.circe.java8.time`._
[error]        ^

My guess is that scalameta thinks that java.time is one big identifier instead of a package path and therefore quotes it.

How to reorder execution of sbt-graphql plugins?

Hi Guys, I am trying to achieve following goals:

  1. Generate a .schema file from my Schema Definition
  2. Validate queries using .schema file generated in 1st step
  3. Generate code for the queries using .schema file generated in 1st step

I am able to achieve first 2 goals via following configuration in build.sbt:

// generate a schema file
graphqlSchemaSnippet := "root.SchemaDefinition.StarWarsSchema"
target in graphqlSchemaGen := new File("src/main/graphql/schema")

// Validating queries
sourceDirectory in (Test, graphqlValidateQueries) := new File("src/main/graphql/queries")

enablePlugins(GraphQLSchemaPlugin, GraphQLQueryPlugin)

and when I run sbt graphqlValidateQueries, you can see from the logs that it first creates a schema file and then validates my queries against that schema file.

[info] Running graphql.SchemaGen /Users/gschambial/git/sangria-akka-http-example/src/main/graphql/schema/schema.graphql
[info] Generating schema in src/main/graphql/schema/schema.graphql
[info] Checking graphql files in src/main/graphql/queries
[info] Validate src/main/graphql/queries/human.graphql
[success] All 1 graphql files are valid
[success] Total time: 6 s, completed Apr 22, 2020 10:50:49 AM

Now to achieve goal no. 3, when I enable GraphQLCodegenPlugin I start getting errors:

// generate a schema file
graphqlSchemaSnippet := "root.SchemaDefinition.StarWarsSchema"
target in graphqlSchemaGen := new File("src/main/graphql/schema")

// Validating queries
sourceDirectory in (Test, graphqlValidateQueries) := new File("src/main/graphql/queries")

// telling Codegen plugin to use this schema file
graphqlCodegenSchema := new File("src/main/graphql/schema/schema.graphql")

enablePlugins(GraphQLSchemaPlugin, GraphQLQueryPlugin, GraphQLCodegenPlugin)

I am expecting the order of execution to be:

  1. Generate Schema
  2. Validate Queries
  3. Generate code for queries

But no matter which command I trigger (sbt graphqlSchemaGen, sbt graphqlValidateQueries or sbt graphqlCodegen), the following order is executed:

  1. Generate code for queries
  2. Generate schema
  3. Validate schema
[info] Set current project to sangria-akka-http-example (in build file:/Users/gschambial/git/sangria-akka-http-example/)
[info] Generate code for 1 queries
[info] Use schema src/main/graphql/schema/schema.graphql for query validation
[error] java.io.FileNotFoundException: src/main/graphql/schema/schema.graphql (No such file or directory)
[error] (Compile / graphqlCodegen) java.io.FileNotFoundException: src/main/graphql/schema/schema.graphql (No such file or directory)

As the schema is not generated before step 1 executes, it fails. Is there a way to reorder this execution? Apologies if this is some noob sbt issue.

Any help would be highly appreciated.

Trouble getting codegen to work

Hey guys, super new to graphql/sangria, so I apologize in advance if my issue is misplaced.

I am attempting to generate my result classes from a given src/main/resources/schema.graphql file containing the following lines:

type Query {
  transaction: Transaction!
}

type Transaction {
  name: String
  date: String
  id: ID!
}

My project/plugins.sbt file looks like this:

addSbtPlugin("io.spray" % "sbt-revolver" % "0.9.1")
addSbtPlugin("com.heroku" % "sbt-heroku" % "2.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.1")
addSbtPlugin("rocks.muki" % "sbt-graphql" % "0.5.0")

and build.sbt looks like this:

import rocks.muki.graphql.schema.SchemaLoader

name := "test"
version := "0.0.1"

scalaVersion := "2.12.4"
scalacOptions ++= Seq("-deprecation", "-feature")

libraryDependencies ++= Seq(
  "org.sangria-graphql" %% "sangria" % "1.3.0",
  "org.sangria-graphql" %% "sangria-circe" % "1.1.0",
  "org.sangria-graphql" %% "sangria-relay" % "1.4.1",

  "com.typesafe.akka" %% "akka-http" % "10.1.0",
  "de.heikoseeberger" %% "akka-http-circe" % "1.20.0",

  "io.circe" %%	"circe-core" % "0.9.2",
  "io.circe" %% "circe-parser" % "0.9.2",
  "io.circe" %% "circe-optics" % "0.9.2",
  "com.github.nscala-time" %% "nscala-time" % "2.18.0",

  "org.scalatest" %% "scalatest" % "3.0.5" % Test
)

Revolver.settings
enablePlugins(JavaAppPackaging)
enablePlugins(GraphQLSchemaPlugin, GraphQLQueryPlugin)
enablePlugins(GraphQLCodegenPlugin)

graphqlCodegenStyle := Sangria

val inputSchema = new File("src/main/resources/schema.graphql")
graphqlSchemas += GraphQLSchema(
  "transactions",
  "transactions schema",
  Def.task(
    SchemaLoader.fromFile(inputSchema).loadSchema()
  ).taskValue
)

graphqlCodegenSchema := graphqlRenderSchema.toTask("transactions").value

When I run sbt graphqlCodegen, the operation succeeds and creates a bunch of new files inside of the target/ folder. Here is the success message:

Harrisons-MacBook-Pro:coinecta-graphql-server harrisonwang$ sbt graphqlCodegen
[info] Loading settings from idea.sbt ...
[info] Loading global plugins from /Users/harrisonwang/.sbt/1.0/plugins
[info] Loading settings from assembly.sbt,plugins.sbt ...
[info] Loading project definition from /Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/project
[info] Loading settings from build.sbt ...
[info] Set current project to test (in build file:/Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/)
[info] Rendering schema to: /Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/target/graphql/transactions.graphql
[info] Generate code for 1 queries
[info] Use schema /Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/target/graphql/transactions.graphql for query validation
[success] Total time: 1 s, completed May 18, 2018 11:26:01 PM

Unfortunately, none of these files (to my knowledge) contain the desired generated code; by reading the docs, I expected the result classes to live in a package called graphql.codegen. The file target/scala-2.12/classes/graphql/codegen/GraphQLCodegen.class has the correct package name, but the file itself is empty:

package graphql.codegen
object GraphQLCodegen extends scala.AnyRef {
}

I think the command is accessing the correct schema.graphql file, because it creates a new file target/graphql/transactions.graphql which contains the same schema, so I am not sure where my mistake in the process is.

If anyone could give me any tips/pointers on how to get the code generation working properly, it would be much appreciated. Sorry for the lengthy post, but I wanted to provide as much context as possible. Thanks a bunch!

codegen not generating custom scalar types.

Hi,

I might be missing something, but I have the following issue.

I'm fetching the schema from a remote server and trying to generate case classes for a query.

in my build.sbt I have:

graphqlSchemas += GraphQLSchema(
  "staging",
  "staging schema at https://api.acme.com/graphql",
  Def.task(
    GraphQLSchemaLoader
      .fromIntrospection("https://api.acme.com/graphql", streams.value.log)
      .loadSchema()
  ).taskValue
)

graphqlCodegenSchema := graphqlRenderSchema.toTask("staging").value

This schema has some scalars defined

e.g.

# This represents a json encoded into a String.
scalar JsonString

which is then used in types

type MyAwesomeType implements Node {
   [...]
   meta: JsonString!
   [...]
}

Code generation works, but compilation fails.

In the generated code, it appears as:

case class MyAwesomeInput(something: String, ..., meta: JsonString)

which will fail at compilation step, because JsonString isn't defined:

[error] /.../acme/target/scala-2.12/src_managed/sbt-graphql/AcmeMutation.scala:27:135: not found: type JsonString
[error]   case class MyAwesomeType(something: String, ..., meta: JsonString)

Interestingly, ID is defined as type ID = String. I guess it's a built-in GraphQL type, since I don't see it defined explicitly anywhere in the schema.

I suspect the same might apply to enums defined in the schema.
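For what it's worth, a workaround sketch (an assumption on my part, not documented plugin behaviour): provide the missing alias yourself in a scope visible to the generated code, mirroring how the built-in ID scalar is emitted as type ID = String. The object name and the choice of String are mine.

```scala
// Hypothetical workaround: alias the unknown custom scalar to an existing
// Scala type so the generated code compiles. ScalarAliases and the mapping
// to String are assumptions, not plugin API.
object ScalarAliases {
  // the schema documents JsonString as "a json encoded into a String"
  type JsonString = String
}
```

With an import of ScalarAliases._ in scope of the generated sources, the unresolved JsonString reference would then compile, at the cost of losing a dedicated JSON type.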

Code generation question

Hi @muuki88, thanks for providing this!

I had a question about getting started. In my build.sbt I added enablePlugins(GraphQLCodegenPlugin) and nothing else related to the plugin because I have a file src/main/resources/schema.graphql, which I believe is the default value. I ran sbt graphqlCodegenSchema, but nothing happened. I didn't understand if anything else was required. Am I missing something?
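For context, here is a minimal sketch of the setup as I understand the readme (the resource path is what I believe to be the default; verify the key names against your plugin version):

```scala
// build.sbt — minimal codegen setup sketch
enablePlugins(GraphQLCodegenPlugin)

// graphqlCodegenSchema resolves the schema file; this path is (I believe) the default
graphqlCodegenSchema := (resourceDirectory in Compile).value / "schema.graphql"
```

Note that graphqlCodegenSchema only resolves the schema file; if I read the plugin correctly, the task that actually generates code is graphqlCodegen.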

Examples for common use-cases

While the documentation in the readme is generally good, the plugin now supports multiple independent features, which makes it harder to see the big picture. Recent issues (#83, #84, #85) are all at least partly about misunderstanding or conflating the plugin's features.

Imo it would be nice to have examples for common use-cases like:

  • I want to connect to an existing GraphQL API and generate the Scala types from my queries
  • I'm writing a GraphQL API (using sangria) and want to render the schema as a file
  • I have existing queries and a schema and just want to validate via sbt that they match
  • I have a repository with both server and client; how do I use the plugin with that?

I think we could add a new folder examples and put a minimal project for each use-case there and link them from the readme.

(Maybe we could even sneak in an example of our BreakingChangeSuite, as the topic comes up sometimes and it would be nice to have something to link to)

@muuki88 WDYT?

Stable ordering of output

Hey, thanks for doing this project!

At Humio, we have used it to ensure the stability of our API and to detect API changes in PRs.

I was wondering if it would be possible to make the ordering of the entries in the generated schema stable.

Currently, if you change the order of fields in Sangria, the order also changes in the output.

We could sort the fields in code ourselves, but it would be useful to have this in the plugin, so that reordering does not trigger a false positive in our "API change" check.

Add `document` to GraphQLQuery trait

The GraphQLQuery trait only contains a type Document, but not a def document: Document to access the actual content. This makes accessing it rather tricky.
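A sketch of the proposed shape (sangria.ast.Document is stubbed here so the snippet is self-contained; the generated object is hypothetical):

```scala
// Stub standing in for sangria.ast.Document
final case class Document(source: String)

// Proposed change: expose the parsed query as a value, not only as a type member
trait GraphQLQuery {
  def document: Document
}

// A hypothetical generated query object implementing the accessor
object AllAnimalsQuery extends GraphQLQuery {
  val document: Document = Document("query AllAnimals { animals { name } }")
}
```

Callers could then write AllAnimalsQuery.document directly instead of reconstructing the query text themselves.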

jawn parser library incompatibility

With sbt 1.2.8, I'm running into a library conflict with jawn-parser similar to this one:
https://stackoverflow.com/questions/47227060/ammonite-classpath-clashes-with-github4s-java-lang-abstractmethoderror

Except for different URLs, my build.sbt is similar to the docs:

graphqlSchemas += GraphQLSchema(
  "sangria-example",
  "staging schema at http://try.sangria-graphql.org/graphql",
  Def.task(
    GraphQLSchemaLoader
      .fromIntrospection("http://try.sangria-graphql.org/graphql", streams.value.log)
      .loadSchema()
  ).taskValue
)

The stack trace is this:

[error] java.lang.AbstractMethodError
[error] 	at jawn.CharBasedParser.parseString(CharBasedParser.scala:90)
[error] 	at jawn.CharBasedParser.parseString$(CharBasedParser.scala:87)
[error] 	at jawn.StringParser.parseString(StringParser.scala:15)
[error] 	at jawn.Parser.rparse(Parser.scala:428)
[error] 	at jawn.Parser.parse(Parser.scala:338)
[error] 	at jawn.SyncParser.parse(SyncParser.scala:24)
[error] 	at jawn.SupportParser.$anonfun$parseFromString$1(SupportParser.scala:15)
[error] 	at jawn.SupportParser.parseFromString(SupportParser.scala:15)
[error] 	at jawn.SupportParser.parseFromString$(SupportParser.scala:14)
[error] 	at io.circe.jawn.CirceSupportParser$.parseFromString(CirceSupportParser.scala:7)
[error] 	at io.circe.jawn.JawnParser.parse(JawnParser.scala:16)
[error] 	at io.circe.parser.package$.parse(package.scala:8)
[error] 	at rocks.muki.graphql.schema.IntrospectSchemaLoader.introspect(SchemaLoader.scala:118)
[error] 	at rocks.muki.graphql.schema.IntrospectSchemaLoader.loadSchema(SchemaLoader.scala:85)

In IntelliJ, I see that under External Libraries there is an entry sbt: sbt-and-plugins, which indeed contains two versions of jawn-parser:

jawn-parser_2.12-0.10.4.jar (from SBT itself)
jawn-parser_2.12-0.11.1.jar (from circe-jawn)

It is unclear how to resolve this library conflict.
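One avenue worth trying (untested; the version and organization below are taken from the jars listed above, and binary compatibility would need to be verified) is forcing a single jawn-parser version on the sbt classpath:

```scala
// project/plugins.sbt — force one jawn-parser version for the sbt classloader
dependencyOverrides += "org.spire-math" %% "jawn-parser" % "0.11.1"
```

If 0.11.1 is not binary compatible with sbt's own usage, the AbstractMethodError would simply move elsewhere, so this is a sketch, not a confirmed fix.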

[question] Can I annotate generated classes?

The generated code keeps producing spurious warnings in my project, which is rather annoying. I'd like to annotate the generated classes with @silent from the silencer plugin. Is there any way I can do that?
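One possible alternative to annotating the generated classes (assuming your silencer version supports path filters) is to silence the generated sources by path:

```scala
// build.sbt — suppress warnings for generated sources instead of annotating them
scalacOptions += "-P:silencer:pathFilters=src_managed"
```

This avoids touching the generated code at all, at the cost of silencing every warning under src_managed.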

Generate code from queries

Similar to apollo-codegen, it would be nice to generate domain classes from queries against a schema.

A lot of this has been prototyped in sangria-codegen; however, the code generator should ideally use Sangria's AST directly instead of an intermediate representation built in an initial pass.

Open questions:

  • Is it OK to use scala.meta for the code generation itself?
  • Is generation of traits from interfaces a "must have"? In the sangria-codegen project they were temporarily disabled to ensure progress. (see mediative/sangria-codegen#12)

Fix .scalafmt.conf

The name of the scalafmt config file is wrong. It should be .scalafmt.conf but is currently .scalafmt.

This means all scalafmt tools use their defaults, which are sometimes not exactly the same. For example, my IntelliJ scalafmt plugin formats differently than the sbt plugin used in this project.

Since fixing this requires a one-time reformatting of all code, I recommend doing it only when there are no open branches/PRs.

It is probably a good idea to fix this in the "test-project" as well.

Make graphqlValidateQueries setting more configurable

The graphqlValidateQueries task should work like graphqlCodegenQueries in the CodegenPlugin and use the includeFilter and excludeFilter settings.

    resourceDirectories in graphqlValidateQueries := (resourceDirectories in Compile).value,
    includeFilter in graphqlValidateQueries := "*.graphql",
    excludeFilter in graphqlValidateQueries := HiddenFileFilter,
    graphqlCodegenQueries := Defaults
      .collectFiles(resourceDirectories in graphqlValidateQueries,
                    includeFilter in graphqlValidateQueries,
                    excludeFilter in graphqlValidateQueries)
      .value

Fix generation of interfaces

The scalameta source generator currently has an emitInterfaces flag which allows generating a Scala trait for each GraphQL fragment. The initial goal was to make the generated code easier to use by coding against "interfaces" rather than the "unique" result classes that are currently generated.

The code was disabled in mediative/sangria-codegen#11 since it still requires some work to support fragments with nested selections, e.g.:

fragment ArticleWithAuthorFragment on Article {
  title
  author {
    ...IdFragment
    ...AuthorFragment
  }
}

Alternatively, if this is not actually useful for users, the related code could simply be removed.

Generate code from schemas

It would be very useful to be able to leverage code generation to create case classes for a Sangria server from an SDL/IDL schema (within a .graphql file; Sangria calls this schema materialization).

Is that a feasible feature addition based on the code generation code being added to this project?

about graphql-Java

Excuse me, we use graphql-java a lot. Could this plugin support graphql-java via a PR? Although I don't know whether graphql-java can provide a parser or validation.

Yet another trouble getting codegen to work

Hello,

I'm having a world of trouble trying to generate Scala code from a GraphQL schema at the moment.
Basically, my company runs a Python-based GraphQL server, and what I'm trying to do is download the schema from it and generate Scala code using sbt-graphql.

I was able to get the schema using graphql-cli (schema.graphql).
It's a single file with all types and enums included.
Could you please help me learn what to do next?
In other posts I read that queries should be in separate .graphql files, but how would I split this schema file into those?
If I just follow the README here, sbt graphqlCodegenSchema generates files, but I don't see any that are related to my schema.

Please help...

Fragment code generation is broken for union types

The code generation for fragments produces invalid, non-compilable Scala code.

Example schema and query

A simple schema with a union type

union Animal = Cat | Dog
type Cat {
  name: String!
}
type Dog {
  name: String!
}

type Query {
  animals: [Animal!]!
}

The animal fragments

fragment AnimalName on Animal {
  ...DogName
  ...CatName
}

fragment DogName on Dog {
  name
}

fragment CatName on Cat {
  name
}

And the query that uses the fragment

# import fragments/animalName.fragment.graphql
query AllAnimals {
  animals {
    ...AnimalName
  }
}

Expected code

The fragment and query should generate the following types in Interfaces.scala:

trait AnimalName
trait DogName extends AnimalName {
   def name: String
}
trait CatName extends AnimalName {
  def name: String
}

And the query object code should use these types

object GetAnimals {
  object GetAnimals extends GraphQLQuery {
    val document: sangria.ast.Document = graphql"""..."""
    case class Variables()
    case class Data(animals: List[AnimalName])
    case class DogName(name: String) extends DogName
    case class CatName(name: String) extends CatName
  }
}

Complexities

  • We need to make sure that we reference the correct trait in the generated query code; in the example, DogName is defined both in Interfaces.scala and in the GetAnimals$GetAnimals object
  • The circe code generation needs to place the fragment decoders/encoders above the Data decoder so they can be found during derivation
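To illustrate the second point with a self-contained sketch (circe is replaced by a stub type class and all names are mine): the generated decoders are eager vals, so the Data decoder can only use the fragment decoders if they are initialized before it.

```scala
// Stub standing in for io.circe.Decoder
trait Decoder[A]

object Generated {
  case class DogName(name: String)
  case class Data(animals: List[DogName])

  // The fragment decoder must come first: vals initialize top to bottom, so a
  // forward reference here would still be null when dataDecoder is created.
  implicit val dogNameDecoder: Decoder[DogName] = new Decoder[DogName] {}

  implicit val dataDecoder: Decoder[Data] = {
    val fragment = implicitly[Decoder[DogName]] // resolves dogNameDecoder above
    new Decoder[Data] {}
  }
}
```

Swapping the two vals would compile but leave the fragment reference null at runtime, which is why the generator has to emit them in dependency order.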
