muuki88 / sbt-graphql
SBT plugin to generate and validate graphql schemas written with Sangria
License: Apache License 2.0
While the documentation in the readme is generally good, the plugin now supports multiple independent features, so it's harder to understand the big picture. Recent issues (#83, #84, #85) are all at least in part about misunderstanding/confusing the plugin features.
Imo it would be nice to have examples for common use-cases, e.g. when you write your schema with sangria and want to render the schema as a file via sbt.
I think we could add a new examples folder, put a minimal project for each use-case there, and link them from the readme.
(Maybe we could even sneak in an example of our BreakingChangeSuite
, as the topic comes up sometimes and it would be nice to have something to link to)
@muuki88 WDYT?
It would be very useful to be able to leverage code generation to make case classes for a Sangria server with an SDL/IDL schema (within a .graphql file, Sangria calls this schema materialization).
Is that a feasible feature addition based on the code generation code being added to this project?
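For context, Sangria's materialization entry point is a small surface. A minimal sketch of what the requested codegen would start from (the SDL text here is made up):

```scala
import sangria.parser.QueryParser
import sangria.schema.Schema

// Materialize a Sangria schema from an SDL string; the proposed feature
// would walk the resulting schema to emit case classes for the server side.
val sdl =
  """
    |type Query {
    |  hero: String!
    |}
  """.stripMargin

val schema: Schema[Any, Any] = Schema.buildFromAst(QueryParser.parse(sdl).get)
```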
With sbt 1.2.8, I'm running into a library conflict with jawn parser similar to this:
https://stackoverflow.com/questions/47227060/ammonite-classpath-clashes-with-github4s-java-lang-abstractmethoderror
Except for different URLs, the build.sbt
is similar to the doc:
graphqlSchemas += GraphQLSchema(
"sangria-example",
"staging schema at http://try.sangria-graphql.org/graphql",
Def.task(
GraphQLSchemaLoader
.fromIntrospection("http://try.sangria-graphql.org/graphql", streams.value.log)
.loadSchema()
).taskValue
)
The stack trace is this:
[error] java.lang.AbstractMethodError
[error] at jawn.CharBasedParser.parseString(CharBasedParser.scala:90)
[error] at jawn.CharBasedParser.parseString$(CharBasedParser.scala:87)
[error] at jawn.StringParser.parseString(StringParser.scala:15)
[error] at jawn.Parser.rparse(Parser.scala:428)
[error] at jawn.Parser.parse(Parser.scala:338)
[error] at jawn.SyncParser.parse(SyncParser.scala:24)
[error] at jawn.SupportParser.$anonfun$parseFromString$1(SupportParser.scala:15)
[error] at jawn.SupportParser.parseFromString(SupportParser.scala:15)
[error] at jawn.SupportParser.parseFromString$(SupportParser.scala:14)
[error] at io.circe.jawn.CirceSupportParser$.parseFromString(CirceSupportParser.scala:7)
[error] at io.circe.jawn.JawnParser.parse(JawnParser.scala:16)
[error] at io.circe.parser.package$.parse(package.scala:8)
[error] at rocks.muki.graphql.schema.IntrospectSchemaLoader.introspect(SchemaLoader.scala:118)
[error] at rocks.muki.graphql.schema.IntrospectSchemaLoader.loadSchema(SchemaLoader.scala:85)
In IntelliJ, I see that under External Libraries there is an entry for sbt: sbt-and-plugins.
In there, there are indeed two versions of the jawn parser:
jawn-parser_2.12-0.10.4.jar (from sbt itself)
jawn-parser_2.12-0.11.1.jar (from circe-jawn)
It is unclear how to resolve this library conflict.
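A workaround that sometimes helps with sbt plugin classpath clashes (an untested assumption for this particular case, not a verified fix) is to pin a single jawn-parser version in project/plugins.sbt:

```scala
// project/plugins.sbt -- hypothetical workaround: force one jawn-parser
// version on the sbt plugin classpath (0.11.1 is the version circe-jawn
// pulls in, per the jar listing above).
addSbtPlugin("rocks.muki" % "sbt-graphql" % "0.10.1")
dependencyOverrides += "org.spire-math" %% "jawn-parser" % "0.11.1"
```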
The GraphQLQuery trait only contains a type Document, but no def document: Document to access the actual content. This makes accessing it rather tricky.
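A minimal sketch of the issue and the suggested shape (the trait names follow the issue text; the toy instantiation is made up):

```scala
// Current shape: the generated trait only declares an abstract type,
// not a value to access the actual query document.
trait GraphQLQuery {
  type Document
}

// Proposed shape: also expose the document value itself.
trait GraphQLQueryWithDocument extends GraphQLQuery {
  def document: Document
}

// Toy instantiation showing why the accessor helps callers:
object HeroQuery extends GraphQLQueryWithDocument {
  type Document = String
  val document: Document = "query { hero { name } }"
}
```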
We currently have no tests for enum types. It seems that decoders aren't generated.
Hi,
I need to maintain two separate graphql schemas (public and private) in the same project. Is it possible to define multiple graphqlSchemaSnippets?
Thanks!
Hi guys, I am trying to achieve the following goals:
1. Generate a .schema file from my Schema Definition
2. Validate queries using the .schema file generated in the 1st step
3. Generate code for the queries using the .schema file generated in the 1st step
I am able to achieve the first 2 goals via the following configuration in build.sbt:
// generate a schema file
graphqlSchemaSnippet := "root.SchemaDefinition.StarWarsSchema"
target in graphqlSchemaGen := new File("src/main/graphql/schema")
// Validating queries
sourceDirectory in (Test, graphqlValidateQueries) := new File("src/main/graphql/queries")
enablePlugins(GraphQLSchemaPlugin, GraphQLQueryPlugin)
and when I run sbt graphqlValidateQueries, you can see from the logs that it first creates a schema file and then validates my queries against that schema file.
[info] Running graphql.SchemaGen /Users/gschambial/git/sangria-akka-http-example/src/main/graphql/schema/schema.graphql
[info] Generating schema in src/main/graphql/schema/schema.graphql
[info] Checking graphql files in src/main/graphql/queries
[info] Validate src/main/graphql/queries/human.graphql
[success] All 1 graphql files are valid
[success] Total time: 6 s, completed Apr 22, 2020 10:50:49 AM
Now to achieve goal no. 3
, when I enable GraphQLCodegenPlugin
I start getting errors:
// generate a schema file
graphqlSchemaSnippet := "root.SchemaDefinition.StarWarsSchema"
target in graphqlSchemaGen := new File("src/main/graphql/schema")
// Validating queries
sourceDirectory in (Test, graphqlValidateQueries) := new File("src/main/graphql/queries")
// telling Codegen plugin to use this schema file
graphqlCodegenSchema := new File("src/main/graphql/schema/schema.graphql")
enablePlugins(GraphQLSchemaPlugin, GraphQLQueryPlugin, GraphQLCodegenPlugin)
I am expecting the order of execution to be: graphqlSchemaGen, then graphqlValidateQueries, then graphqlCodegen.
But no matter which command I trigger (sbt graphqlSchemaGen, sbt graphqlValidateQueries, or sbt graphqlCodegen), the following order is executed:
[info] Set current project to sangria-akka-http-example (in build file:/Users/gschambial/git/sangria-akka-http-example/)
[info] Generate code for 1 queries
[info] Use schema src/main/graphql/schema/schema.graphql for query validation
[error] java.io.FileNotFoundException: src/main/graphql/schema/schema.graphql (No such file or directory)
[error] (Compile / graphqlCodegen) java.io.FileNotFoundException: src/main/graphql/schema/schema.graphql (No such file or directory)
As the schema is not generated before the codegen step executes, it fails. Is there a way to reorder this execution? Apologies if it is some noob sbt issue.
Any help would be highly appreciated.
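One way to get the desired ordering (a sketch, assuming graphqlSchemaGen is a TaskKey[File] as the logs suggest) is to wire graphqlCodegenSchema to the task output instead of a raw path, so sbt tracks the dependency and runs schema generation first:

```scala
// build.sbt -- hypothetical: depend on the task rather than the file path,
// so sbt orders graphqlSchemaGen before graphqlCodegen automatically.
graphqlCodegenSchema := graphqlSchemaGen.value
```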
The scalameta source generator currently has an emitInterfaces flag which allows generating a Scala trait for each GraphQL fragment. The initial goal was to make it easier to use the generated code by coding against "interfaces" rather than the "unique" result classes that are currently generated.
The code was disabled in mediative/sangria-codegen#11 since it still requires some work to support fragments with nested selections, e.g.:
fragment ArticleWithAuthorFragment on Article {
title
author {
...IdFragment
...AuthorFragment
}
}
Alternatively if this is not actually useful for users, the related code could just be removed.
The GraphQL config protocol is an attempt at standardizing a .graphqlconfig file for specifying schema paths, endpoints etc.
See:
Not sure if this fits within the goal of sbt-graphql and what is the current status of the config protocol.
The graphqlValidateQueries task should work like the graphqlCodegenQueries task in the CodegenPlugin and use the include and exclude filter settings.
resourceDirectories in graphqlValidateQueries := (resourceDirectories in Compile).value,
includeFilter in graphqlCodegen := "*.graphql",
excludeFilter in graphqlCodegen := HiddenFileFilter,
graphqlCodegenQueries := Defaults
.collectFiles(resourceDirectories in graphqlValidateQueries,
includeFilter in graphqlValidateQueries,
excludeFilter in graphqlValidateQueries)
.value
I think https://github.com/muuki88/sbt-graphql/releases/tag/v0.1.6 should have been 0.16.0
. Also the release title (v1.6.0) is not right. Maybe cut a new version to avoid confusion?
The generated code keeps producing spurious warnings on my project, which is rather annoying. I'd like to annotate them with @silent
from the silencer plugin. Is there any way I can do that?
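Until the plugin supports suppression directly, one option (a sketch using the silencer plugin's own path filter, nothing sbt-graphql specific) is to silence everything under src_managed:

```scala
// build.sbt -- hypothetical workaround: suppress warnings in generated
// sources via silencer's pathFilters option (a silencer feature).
libraryDependencies ++= Seq(
  compilerPlugin("com.github.ghik" % "silencer-plugin" % "1.7.1" cross CrossVersion.full),
  "com.github.ghik" % "silencer-lib" % "1.7.1" % Provided cross CrossVersion.full
)
scalacOptions += "-P:silencer:pathFilters=src_managed"
```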
In our codebase I observed that the generated files are often recompiled without explicitly cleaning them or changing the schema. This isn't a big deal for small schemas, but in our case over 400 files are being recompiled, which takes several seconds.
I don't have a repro-case yet, but it should be fairly straightforward. I can imagine using one of the test/example projects and checking how often the files are being regenerated (it should only happen if the schema changed or after an explicit clean command).
sbt already has facilities to avoid unnecessary work (e.g. to avoid recompiling files that haven't changed), so imo we should use that here as well.
References:
https://www.scala-sbt.org/1.x/docs/Caching.html
https://www.scala-sbt.org/1.x/docs/Cached-Resolution.html
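A sketch of what that could look like inside the plugin (generateCode is a placeholder for the actual generator, not a real function in this codebase):

```scala
// Hypothetical sketch: FileFunction.cached re-runs the body only when the
// input file set changes (tracked by content hash in the cache directory).
import sbt._
import sbt.Keys._

graphqlCodegen := {
  val cacheDir = streams.value.cacheDirectory / "sbt-graphql"
  val cached = FileFunction.cached(cacheDir, FilesInfo.hash) { (in: Set[File]) =>
    generateCode(in, (sourceManaged in Compile).value) // placeholder generator
  }
  cached(graphqlCodegenQueries.value.toSet).toSeq
}
```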
It would be nice to have scripted tests for the plugin. I'm not currently sure how to manually test things using the test-project, so it would be great if we could achieve that.
Hello,
I'm having a world of trouble trying to generate scala code from a graphql schema at the moment.
So basically, my company runs a Python based graphql server and what I'm trying to do is to download the schema from it and generate Scala code using sbt-graphql.
I was able to get the schema by using graphql-cli. (schema.graphql)
It's a single file with all types and enums included.
Could you please help me learn what to do next?
In other posts I read that queries should be in separate .graphql files, but how would I separate this schema file to those?
If I just follow the README here, "sbt graphqlCodegenSchema" generates files but I don't see any that are related to my schema.
Please help...
My sbt build fails to find addSbtPlugin("rocks.muki" % "sbt-graphql" % "0.10.1")
although it used to work before.
The validation should only be available in the Compile scope.
If the IntegrationTest scope is requested, we can add it later.
The goal is for schema comments to appear in the scaladoc of the generated code.
As mentioned here: #15 (comment)
Hi,
I might be missing something, but I have the following issue.
I'm fetching the schema from a remote server and trying to generate case classes for a query.
in my build.sbt I have:
graphqlSchemas += GraphQLSchema(
"staging",
"staging schema at https://api.acme.com/graphql",
Def.task(
GraphQLSchemaLoader
.fromIntrospection("https://api.acme.com/graphql", streams.value.log)
.loadSchema()
).taskValue
)
graphqlCodegenSchema := graphqlRenderSchema.toTask("staging").value
This schema has some scalars defined
e.g.
# This represents a json encoded into a String.
scalar JsonString
which is then used in types
type MyAwesomeType implements Node {
[...]
meta: JsonString!
[...]
}
Code generation works, but compilation fails.
In the generated code, it appears as:
case class MyAwesomeInput(something: String, ..., meta: JsonString)
which will fail at compilation step, because JsonString
isn't defined:
[error] /.../acme/target/scala-2.12/src_managed/sbt-graphql/AcmeMutation.scala:27:135: not found: type JsonString
[error] case class MyAwesomeType(something: String, ..., meta: JsonString)
Interestingly, ID is defined as type ID = String. I guess it's a built-in GraphQL type, since I don't see it defined explicitly anywhere in the schema.
I suspect the same might apply to enums defined in the schema.
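A possible workaround (the alias package here is made up; graphqlCodegenImports is the plugin setting shown in other issues on this page): define the missing scalar as a Scala alias and import it into the generated sources.

```scala
// src/main/scala/acme/scalars/package.scala -- hypothetical alias for the
// custom scalar, so the generated case classes compile:
package acme

package object scalars {
  type JsonString = String
}
```

Then, in build.sbt: `graphqlCodegenImports += "acme.scalars._"`.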
Hi,
Currently there is no way to configure the output directory for generated files. They are generated by default in the target directory.
As the generated files are source files, putting them in target doesn't seem right. Therefore, I think it would be very helpful if the plugin allowed setting an output folder, to be able to put the files somewhere under the src directory.
Please share your thoughts.
Thanks
Lakha
Sangria exposes several additional custom scalars. https://sangria-graphql.org/learn/#scalar-types
When running sbt graphqlSchemaGen, the BigDecimal custom scalar is not included in the generated schema.
I expect that perhaps it may just be something I am missing from the documentation, and any guidance would be greatly appreciated.
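One thing worth checking (an assumption about how rendering works, not a verified answer): a scalar typically only shows up in the rendered schema if it is reachable from some field. Sangria's BigDecimalType exists, but it must be referenced:

```scala
import sangria.schema._

// Hypothetical example: referencing BigDecimalType from a field makes the
// scalar reachable, and therefore part of the rendered schema output.
val QueryType = ObjectType(
  "Query",
  fields[Unit, Unit](
    Field("price", BigDecimalType, resolve = _ => BigDecimal(1))
  )
)

val schema = Schema(QueryType)
```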
I'll just document this here, sadly I don't have the time to fix it right now.
Version used: 0.16.4
Given the following query:
query Test {
content {
__typename
... on Foo {
slug
}
... on Bar {
id
foo {
slug
}
}
}
}
The generated code looks like this (only the relevant part):
object Content {
case class Foo(__typename: String, slug: String) extends Content
object Foo {
implicit val jsonDecoder: Decoder[Foo] = deriveDecoder[Foo]
implicit val jsonEncoder: Encoder[Foo] = deriveEncoder[Foo]
}
case class Bar(__typename: String, id: Int, foo: Content.Foo) extends Content
object Bar {
case class Foo(__typename: String, slug: String)
object Foo {
implicit val jsonDecoder: Decoder[Foo] = deriveDecoder[Foo]
implicit val jsonEncoder: Encoder[Foo] = deriveEncoder[Foo]
}
implicit val jsonDecoder: Decoder[Bar] = deriveDecoder[Bar]
implicit val jsonEncoder: Encoder[Bar] = deriveEncoder[Bar]
}
implicit val jsonDecoder: Decoder[Content] = for {
typeDiscriminator <- Decoder[String].prepare(_.downField("__typename"))
value <- typeDiscriminator match {
case "Foo" =>
Decoder[Foo]
case "Bar" =>
Decoder[Bar]
case other =>
Decoder.failedWithMessage("invalid type: " + other)
}
} yield value
implicit val jsonEncoder: Encoder[Content] = Encoder.instance[Content]({
case v: Foo =>
deriveEncoder[Foo].apply(v)
case v: Bar =>
deriveEncoder[Bar].apply(v)
})
}
There are two bugs here.
First, Foo and Bar have case class fields for __typename, which means that we have to request __typename in the query for both of them or the decoding will fail for no good reason. It would make more sense to generate constants, since we already know the value anyway and it never changes.
// Instead of
case class Foo(__typename: String, slug: String)
// we should generate
case class Foo(slug: String) {
def __typename: String = "Foo"
}
// or maybe even
case class Foo(slug: String) {
def __typename: String = Foo.__typename
}
object Foo {
def __typename: String = "Foo"
}
Second, inside of Bar we reference Foo for the second union case in the query. In the generated code we have a dedicated Bar.Foo, but for some dumb reason we use Content.Foo in Bar and Bar.Foo is unused. This means the whole thing doesn't work if the two cases use different fields from Foo.
I'm not sure if there is a reason for this (fragments?) or if it's just an honest mistake.
Not really a bug but strange: The implementation for Encoder[Content]
derives encoders inline instead of using the existing ones. The equivalent Decoder
doesn't do this. Imo we shouldn't derive something inline, especially if it already exists.
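The third point could be fixed by reusing the companion instances (a sketch against the generated code shown above, not a tested patch):

```scala
// Hypothetical fix: reuse the already-derived companion instances instead
// of calling deriveEncoder again inline for each union case.
implicit val jsonEncoder: Encoder[Content] = Encoder.instance[Content] {
  case v: Foo => Foo.jsonEncoder(v)
  case v: Bar => Bar.jsonEncoder(v)
}
```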
Hey guys, super new to graphql/sangria, so I apologize in advance if my issue is misplaced.
I am attempting to generate my result classes from a given src/main/resources/schema.graphql
file containing the following lines:
type Query {
transaction: Transaction!
}
type Transaction {
name: String
date: String
id: ID!
}
My project/plugins.sbt file looks like this:
addSbtPlugin("io.spray" % "sbt-revolver" % "0.9.1")
addSbtPlugin("com.heroku" % "sbt-heroku" % "2.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.1")
addSbtPlugin("rocks.muki" % "sbt-graphql" % "0.5.0")
and build.sbt looks like this:
import rocks.muki.graphql.schema.SchemaLoader
name := "test"
version := "0.0.1"
scalaVersion := "2.12.4"
scalacOptions ++= Seq("-deprecation", "-feature")
libraryDependencies ++= Seq(
"org.sangria-graphql" %% "sangria" % "1.3.0",
"org.sangria-graphql" %% "sangria-circe" % "1.1.0",
"org.sangria-graphql" %% "sangria-relay" % "1.4.1",
"com.typesafe.akka" %% "akka-http" % "10.1.0",
"de.heikoseeberger" %% "akka-http-circe" % "1.20.0",
"io.circe" %% "circe-core" % "0.9.2",
"io.circe" %% "circe-parser" % "0.9.2",
"io.circe" %% "circe-optics" % "0.9.2",
"com.github.nscala-time" %% "nscala-time" % "2.18.0",
"org.scalatest" %% "scalatest" % "3.0.5" % Test
)
Revolver.settings
enablePlugins(JavaAppPackaging)
enablePlugins(GraphQLSchemaPlugin, GraphQLQueryPlugin)
enablePlugins(GraphQLCodegenPlugin)
graphqlCodegenStyle := Sangria
val inputSchema = new File("src/main/resources/schema.graphql")
graphqlSchemas += GraphQLSchema(
"transactions",
"transactions schema",
Def.task(
SchemaLoader.fromFile(inputSchema).loadSchema()
).taskValue
)
graphqlCodegenSchema := graphqlRenderSchema.toTask("transactions").value
When I run
sbt graphqlCodegen
the operation succeeds and creates a bunch of new files inside of the target/ folder. Here is the success message:
Harrisons-MacBook-Pro:coinecta-graphql-server harrisonwang$ sbt graphqlCodegen
[info] Loading settings from idea.sbt ...
[info] Loading global plugins from /Users/harrisonwang/.sbt/1.0/plugins
[info] Loading settings from assembly.sbt,plugins.sbt ...
[info] Loading project definition from /Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/project
[info] Loading settings from build.sbt ...
[info] Set current project to test (in build file:/Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/)
[info] Rendering schema to: /Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/target/graphql/transactions.graphql
[info] Generate code for 1 queries
[info] Use schema /Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/target/graphql/transactions.graphql for query validation
[success] Total time: 1 s, completed May 18, 2018 11:26:01 PM
Unfortunately, none of these files (to my knowledge) contain the desired generated code; by reading the docs, I expected the result classes to live in a package called graphql.codegen
. The file target/scala-2.12/classes/graphql/codegen/GraphQLCodegen.class
has the correct package name, but the file itself is empty:
package graphql.codegen
object GraphQLCodegen extends scala.AnyRef {
}
I think the command is accessing the correct schema.graphql
file, because it creates a new file target/graphql/transactions.graphql
which contains the same schema, so I am not sure where my mistake in the process is.
If anyone could give me any tips/pointers on how to get the code generation working properly, it would be much appreciated. Sorry for the lengthy post, but I wanted to provide as much context as possible. Thanks a bunch!
The Apollo codegen style doesn't support input-object-values
Replace the Travis build and release with GitHub Actions.
Hey,
currently only Decoder instances for the returned graphql result are generated. But the input variables of the query are usually sent to graphql as json as well.
It would be beneficial to add Encoder derivation for the Variables case class.
I'm not intimately familiar with the graphql spec, so I'm not sure whether input variables can be nested, but if not, deriving the Encoder for the top-level Variables case class might be enough.
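A sketch of what the generated companion could include, mirroring the Decoder derivation seen in other issues on this page (the Variables fields here are made up):

```scala
import io.circe.Encoder
import io.circe.generic.semiauto.deriveEncoder

// Hypothetical generated shape: an Encoder for the query's input variables,
// so they can be serialized to json for the request body.
case class Variables(id: String, limit: Option[Int])
object Variables {
  implicit val jsonEncoder: Encoder[Variables] = deriveEncoder[Variables]
}
```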
The type is redundant and adds unnecessary complexity. We fix the type to sangria.ast.Document.
Hi @muuki88. Thank you for the nice plugin!
I'm using a pretty default sbt-graphql configuration in a Play project. I ran into an issue with code generation, which happens only in the test phase, with an error similar to: Symbol 'type graphql.codegen.types.myInputType' is missing from the classpath.
I think this may be related to the different generated Interfaces.scala files in /main and in /test:
/* ../web/target/scala-2.13/src_managed/main/sbt-graphql/Interfaces.scala */
package graphql.codegen
import io.circe.{ Decoder, Encoder }
import io.circe.generic.semiauto.{ deriveDecoder, deriveEncoder }
object types {
case class myInputType(questionId: Option[String], userAnswers: List[String])
case object myInputType {
implicit val jsonDecoder: Decoder[myInputType] = deriveDecoder[myInputType]
implicit val jsonEncoder: Encoder[myInputType] = deriveEncoder[myInputType]
}
}
Classes are missing in the file generated in /test:
/* ../web/target/scala-2.13/src_managed/test/sbt-graphql/Interfaces.scala*/
package graphql.codegen
import io.circe.{ Decoder, Encoder }
import io.circe.generic.semiauto.{ deriveDecoder, deriveEncoder }
object types
Any help would be greatly appreciated!
Alias URI to String.
GraphQLSchemaLoader.fromFile currently parses a file according to the graphql schema definition language syntax.
It would help to also load graphql schema definitions serialized as JSON -- e.g., like the github schema;
see: https://developer.github.com/v4/guides/intro-to-graphql/#discovering-the-graphql-api
Hi @muuki88, thanks for providing this!
I had a question about getting started. In my build.sbt
I added enablePlugins(GraphQLCodegenPlugin)
and nothing else related to the plugin because I have a file src/main/resources/schema.graphql
, which I believe is the default value. I ran sbt graphqlCodegenSchema
, but nothing happened. I didn't understand if anything else was required. Am I missing something?
The Apollo GraphQL webpack loader can handle magic imports to include fragments.
query MyQuery {
#import fragments/frag.graphql
#import fragments/more.graphql
}
Also see apollographql/graphql-tag/issues/74
With the new 0.10.1 release I have a problem with "additional imports". The imports are still inserted into the generated sources, but now have back-ticks that aren't correct:
Additional imports:
graphqlCodegenImports ++= Seq(
"java.time._",
"io.circe.java8.time._",
),
Compiler complains:
[error] SomeGeneratedSourceFile.scala:2:8: not found: object java.time
[error] import `java.time`._
[error] ^
[error] SomeGeneratedSourceFile.scala:3:8: not found: object io.circe.java8.time
[error] import `io.circe.java8.time`._
[error] ^
My guess is that scalameta thinks that java.time
is one big identifier instead of a package path and therefore quotes it.
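If that guess is right, the fix is to parse the configured import string as a statement instead of wrapping it in a single name. A sketch with scalameta (assuming that is indeed how the imports are emitted):

```scala
import scala.meta._

// Hypothetical illustration: parsing the full import statement keeps
// "java.time" as a dotted path rather than one back-ticked identifier.
val imp = "import java.time._".parse[Stat].get
// imp.syntax renders as: import java.time._
```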
The code generation for fragments produces invalid / non-compilable Scala code.
A simple schema with a union type
union Animal = Cat | Dog
type Cat {
  name: String!
}
type Dog {
  name: String!
}
type Query {
  animals: [Animal!]!
}
The animal fragments
fragment AnimalName on Animal {
...DogName
...CatName
}
fragment DogName on Dog {
name
}
fragment CatName on Cat {
name
}
And use the fragment
# import fragments/animalName.fragment.graphql
query AllAnimals {
animals {
...AnimalName
}
}
The fragment and query should generate the following types in the Interfaces.scala
trait AnimalName
trait DogName extends AnimalName {
def name: String
}
trait CatName extends AnimalName {
def name: String
}
And the query object code should use these types
object GetAnimals {
object GetAnimals extends GraphQLQuery {
val document: sangria.ast.Document = graphql"""..."""
case class Variables()
case class Data(animals: List[AnimalName])
case class DogName(name: String) extends DogName
case class CatName(name: String) extends CatName
}
}
DogName is defined in the Interfaces.scala and in the GetAnimals$GetAnimals object.
The decoder / encoder must be generated above the Data encoder so it can find it during derivation.

The name of the scalafmt config file is wrong. It should be .scalafmt.conf but is currently .scalafmt.
This means all scalafmt tools uses their default, which is sometimes not 100% the same. For example my IntelliJ scalafmt-plugin does something different than the sbt plugin used in this project.
Since fixing this would require a one-time reformatting of all code, I recommend doing this only if there are no open branches/PRs.
It is probably a good idea to fix this in the "test-project" as well.
Hey, thanks for doing this project!
We have used it to secure stability of our API and detect changes to the API in PRs at Humio.
I was wondering if it would be possible to make the ordering of the entries in the generated schema stable.
Currently, if you change the order of fields in Sangria, it will also change in the output.
We can sort them in code, but it would be useful to have this in the plugin, so reordering does not trigger a false positive for our "API change" check.
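On the Sangria side, one workaround today (a sketch; the field definitions here are made up) is to sort field definitions by name when building each type, which makes the rendered output independent of declaration order:

```scala
import sangria.schema._

// Hypothetical: sorting fields by name before constructing the ObjectType
// keeps the rendered schema stable under reordering in the Scala source.
val QueryType = ObjectType(
  "Query",
  fields[Unit, Unit](
    Field("zebra", StringType, resolve = _ => "z"),
    Field("apple", StringType, resolve = _ => "a")
  ).sortBy(_.name)
)
```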
Similar to apollo-codegen it would be nice to generate domain classes from queries against a schema.
A lot of this has been prototyped in sangria-codegen, however, the code generator should ideally use Sangria's AST directly instead of an intermediate representation based on an initial pass.
Open questions:
Excuse me, we use graphql-java a lot. Could this plugin support graphql-java via a PR? Although I don't know if graphql-java can provide a parser or validation.
It would be helpful to sbt users if they could see what the values of these settings/tasks are, by turning the functions into classes that define toString.
The generated code may require additional imports for
For this reason a user should be able to specify additional imports. An API could look like this.
In your build.sbt
graphqlCodegenImports += "com.example.time.codecs._"
The additional imports should be available in the codegen context.
Hi Guys,
I am trying to generate a schema file
which is a combination of Local Schema and Results of an Introspection:
graphqlSchemaSnippet := "demo.StaticSchema.schema"
target in graphqlSchemaGen := target.value / "graphql"
The above config generates a .schema file from my Local Schema. I also want to generate a schema file from my introspection result:
graphqlSchemas += GraphQLSchema(
"Remote",
"Remote Schema",
Def.task(
GraphQLSchemaLoader
.fromIntrospection("http://localhost:8081/graphql", streams.value.log)
.loadSchema()
).taskValue
)
But after running sbt graphqlSchemaGen I don't see any introspection schema. I guess I am missing some piece and am not able to figure it out from the documentation.
Any help would be appreciated. Thanks
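A hedged guess at the missing piece, following the pattern used in other issues on this page: registered graphqlSchemas are rendered via graphqlRenderSchema, not by graphqlSchemaGen (which only renders the local snippet):

```scala
// build.sbt -- hypothetical: render the registered "Remote" schema and use
// it (e.g. as the codegen input) via the toTask pattern.
graphqlCodegenSchema := graphqlRenderSchema.toTask("Remote").value
```

From the sbt shell, the schema can also be rendered directly with `graphqlRenderSchema Remote`.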
All decoders are generated except the top-level Data type decoder.