muuki88 / sbt-graphql

SBT plugin to generate and validate graphql schemas written with Sangria

License: Apache License 2.0

Scala 100.00%
sbt graphql schema-generation schema-validation sbt-plugin

sbt-graphql's Issues

Examples for common use-cases

While the documentation in the readme is generally good, the plugin now supports multiple independent features, so it's harder to understand the big picture. Recent issues (#83, #84, #85) are all at least in part about misunderstanding or confusing the plugin's features.

Imo it would be nice to have examples for common use-cases like:

  • I want to connect to an existing GraphQL-api and want to generate the Scala-types from my queries
  • I'm writing a GraphQL-api (using sangria) and want to render the schema as a file
  • I have existing queries and a schema and just want to validate they match via sbt
  • I have a repository with both server/client, how do I use the plugin with that

I think we could add a new folder examples and put a minimal project for each use-case there and link them from the readme.

(Maybe we could even sneak in an example of our BreakingChangeSuite, as the topic comes up sometimes and it would be nice to have something to link to)

@muuki88 WDYT?

Generate code from schemas

It would be very useful to be able to leverage code generation to make case classes for a Sangria server from an SDL/IDL schema in a .graphql file (Sangria calls this schema materialization).

Is that a feasible feature addition based on the code generation code being added to this project?
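For reference, Sangria itself can already materialize a schema from SDL; a minimal sketch of that step (API names taken from Sangria's documentation, so treat the exact signatures as an assumption):

```scala
import sangria.parser.QueryParser
import sangria.schema.Schema

val sdl =
  """
    |type Query {
    |  hello: String
    |}
  """.stripMargin

// Parse the SDL document and materialize a (resolver-less) schema from it.
// Code generation would then walk this Schema to emit case classes.
val schema = Schema.buildFromAst(QueryParser.parse(sdl).get)
```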

jawn parser library incompatibility

With sbt 1.2.8, I'm running into a library conflict with jawn parser similar to this:
https://stackoverflow.com/questions/47227060/ammonite-classpath-clashes-with-github4s-java-lang-abstractmethoderror

Apart from different URLs, my build.sbt is similar to the documentation:

graphqlSchemas += GraphQLSchema(
  "sangria-example",
  "staging schema at http://try.sangria-graphql.org/graphql",
  Def.task(
    GraphQLSchemaLoader
      .fromIntrospection("http://try.sangria-graphql.org/graphql", streams.value.log)
      .loadSchema()
  ).taskValue
)

The stack trace is this:

[error] java.lang.AbstractMethodError
[error] 	at jawn.CharBasedParser.parseString(CharBasedParser.scala:90)
[error] 	at jawn.CharBasedParser.parseString$(CharBasedParser.scala:87)
[error] 	at jawn.StringParser.parseString(StringParser.scala:15)
[error] 	at jawn.Parser.rparse(Parser.scala:428)
[error] 	at jawn.Parser.parse(Parser.scala:338)
[error] 	at jawn.SyncParser.parse(SyncParser.scala:24)
[error] 	at jawn.SupportParser.$anonfun$parseFromString$1(SupportParser.scala:15)
[error] 	at jawn.SupportParser.parseFromString(SupportParser.scala:15)
[error] 	at jawn.SupportParser.parseFromString$(SupportParser.scala:14)
[error] 	at io.circe.jawn.CirceSupportParser$.parseFromString(CirceSupportParser.scala:7)
[error] 	at io.circe.jawn.JawnParser.parse(JawnParser.scala:16)
[error] 	at io.circe.parser.package$.parse(package.scala:8)
[error] 	at rocks.muki.graphql.schema.IntrospectSchemaLoader.introspect(SchemaLoader.scala:118)
[error] 	at rocks.muki.graphql.schema.IntrospectSchemaLoader.loadSchema(SchemaLoader.scala:85)

In IntelliJ, I see that under External Libraries there is an entry for sbt: sbt-and-plugins.
In there, there are indeed two versions of the jawn parser:

jawn-parser_2.12-0.10.4.jar (from SBT itself)
jawn-parser_2.12-0.11.1.jar (from circe-jawn)

It is unclear how to resolve this library conflict.

Add `document` to GraphQLQuery trait

The GraphQLQuery trait only contains a type Document, but not a def document: Document to access the actual content. This makes accessing it rather tricky.
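A minimal sketch of the proposed change (the surrounding trait shape here is simplified and assumed, not copied from the plugin):

```scala
// Current shape (simplified): only an abstract type member is declared.
trait GraphQLQuery {
  type Document
  // Proposed addition: a value exposing the actual document content.
  def document: Document
}

// Generated objects could then implement it. String is a stand-in here for
// sangria.ast.Document, which the real plugin would use.
object HeroQuery extends GraphQLQuery {
  type Document = String
  val document: String = "query { hero { name } }"
}
```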

How to reorder execution of sbt-graphql plugins?

Hi guys, I am trying to achieve the following goals:

  1. Generate a .schema file from my Schema Definition
  2. Validate queries using .schema file generated in 1st step
  3. Generate code for the queries using .schema file generated in 1st step

I am able to achieve the first two goals via the following configuration in build.sbt:

// generate a schema file
graphqlSchemaSnippet := "root.SchemaDefinition.StarWarsSchema"
target in graphqlSchemaGen := new File("src/main/graphql/schema")

// Validating queries
sourceDirectory in (Test, graphqlValidateQueries) := new File("src/main/graphql/queries")

enablePlugins(GraphQLSchemaPlugin, GraphQLQueryPlugin)

When I run sbt graphqlValidateQueries, you can see from the logs that it first creates a schema file and then validates my queries against that schema file.

[info] Running graphql.SchemaGen /Users/gschambial/git/sangria-akka-http-example/src/main/graphql/schema/schema.graphql
[info] Generating schema in src/main/graphql/schema/schema.graphql
[info] Checking graphql files in src/main/graphql/queries
[info] Validate src/main/graphql/queries/human.graphql
[success] All 1 graphql files are valid
[success] Total time: 6 s, completed Apr 22, 2020 10:50:49 AM

Now to achieve goal no. 3, when I enable GraphQLCodegenPlugin I start getting errors:

// generate a schema file
graphqlSchemaSnippet := "root.SchemaDefinition.StarWarsSchema"
target in graphqlSchemaGen := new File("src/main/graphql/schema")

// Validating queries
sourceDirectory in (Test, graphqlValidateQueries) := new File("src/main/graphql/queries")

// telling Codegen plugin to use this schema file
graphqlCodegenSchema := new File("src/main/graphql/schema/schema.graphql")

enablePlugins(GraphQLSchemaPlugin, GraphQLQueryPlugin, GraphQLCodegenPlugin)

I am expecting the order of execution to be:

  1. Generate Schema
  2. Validate Queries
  3. Generate code for queries

But no matter which command I trigger (sbt graphqlSchemaGen, sbt graphqlValidateQueries, or sbt graphqlCodegen), the following order is executed:

  1. Generate code for queries
  2. Generate schema
  3. Validate queries
[info] Set current project to sangria-akka-http-example (in build file:/Users/gschambial/git/sangria-akka-http-example/)
[info] Generate code for 1 queries
[info] Use schema src/main/graphql/schema/schema.graphql for query validation
[error] java.io.FileNotFoundException: src/main/graphql/schema/schema.graphql (No such file or directory)
[error] (Compile / graphqlCodegen) java.io.FileNotFoundException: src/main/graphql/schema/schema.graphql (No such file or directory)

Since the schema is not generated before step 1 executes, it fails. Is there a way to reorder this execution? Apologies if it is some noob sbt issue.

Any help would be highly appreciated.
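The usual sbt answer (an assumption about the right wiring, not something verified against this plugin) is to declare an explicit task dependency so graphqlSchemaGen always runs first:

```scala
// build.sbt sketch: force schema generation before codegen and validation,
// so the schema file exists when the downstream tasks read it.
graphqlCodegen := graphqlCodegen.dependsOn(graphqlSchemaGen).value
graphqlValidateQueries in Test :=
  (graphqlValidateQueries in Test).dependsOn(graphqlSchemaGen).value
```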

Fix generation of interfaces

The scalameta source generator currently has an emitInterfaces flag which allows generating a Scala trait for each GraphQL fragment. The initial goal was to make it easier to use the generated code by coding against "interfaces" rather than the "unique" result classes that are currently generated.

The code was disabled in mediative/sangria-codegen#11 since it still requires some work to support fragments with nested selections, e.g.:

fragment ArticleWithAuthorFragment on Article {
  title
  author {
    ...IdFragment
    ...AuthorFragment
  }
}

Alternatively, if this is not actually useful for users, the related code could just be removed.

Make graphqlValidateQueries setting more configurable

The graphqlValidateQueries task should work like graphqlCodegenQueries in the CodegenPlugin and use the include and exclude filter settings.

    resourceDirectories in graphqlValidateQueries := (resourceDirectories in Compile).value,
    includeFilter in graphqlValidateQueries := "*.graphql",
    excludeFilter in graphqlValidateQueries := HiddenFileFilter,
    graphqlCodegenQueries := Defaults
      .collectFiles(resourceDirectories in graphqlValidateQueries,
                    includeFilter in graphqlValidateQueries,
                    excludeFilter in graphqlValidateQueries)
      .value

[question] Can I annotate generated classes?

The generated code keeps producing spurious warnings on my project, which is rather annoying. I'd like to annotate them with @silent from the silencer plugin. Is there any way I can do that?

Only regenerate Scala files from GraphQL schema if schema changes

In our codebase I observed that the generated files are often recompiled without explicitly cleaning them or changing the schema. This isn't a big deal for small schemas, but in our case over 400 files are being recompiled, which takes several seconds.

I don't have a repro-case yet, but it should be fairly straightforward. I can imagine using one of the test/example projects and checking how often the files are regenerated (it should only happen if the schema changed or after an explicit clean command).

sbt already has facilities to avoid unnecessary work (e.g. used to avoid compiling files that haven't changed), so we should use that here as well imo.

References:
https://www.scala-sbt.org/1.x/docs/Caching.html
https://www.scala-sbt.org/1.x/docs/Cached-Resolution.html
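A sketch of how that could look with sbt's FileFunction.cached (the generateCode call is hypothetical; only the caching wiring is the point):

```scala
// build.sbt sketch: re-run the code generator only when the schema changes.
Compile / graphqlCodegen := {
  val s = streams.value
  val schemaFile = graphqlCodegenSchema.value
  val cached = FileFunction.cached(s.cacheDirectory / "sbt-graphql") { _ =>
    // generateCode is a stand-in for the plugin's actual generator;
    // it must return the Set[File] it produced.
    generateCode(schemaFile)
  }
  cached(Set(schemaFile)).toSeq
}
```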

Scripted tests

It would be nice to have scripted tests for the plugin. I'm not currently sure how to manually test things using the test-project.

Yet another trouble getting codegen to work

Hello,

I'm having a world of trouble trying to generate Scala code from a GraphQL schema at the moment.
So basically, my company runs a Python-based GraphQL server, and what I'm trying to do is download the schema from it and generate Scala code using sbt-graphql.

I was able to get the schema by using graphql-cli. (schema.graphql)
It's a single file with all types and enums included.
Could you please help me learn what to do next?
In other posts I read that queries should be in separate .graphql files, but how would I split this schema file into those?
If I just follow the README here, sbt graphqlCodegenSchema generates files, but I don't see any that are related to my schema.

Please help...

Not available on maven?

My sbt build fails to resolve addSbtPlugin("rocks.muki" % "sbt-graphql" % "0.10.1"), although it used to work before.

  • Did someone remove the rocks.muki groupId?
  • Is it just me having trouble getting it to show up in search too?
  • Or are plugins kept and perhaps resolved somewhere else? Not an SBT dev ;)

codegen not generating custom scalar types.

Hi,

I might be missing something, but I have the following issue.

I'm fetching the schema from a remote server and trying to generate case classes for a query.

in my build.sbt I have:

graphqlSchemas += GraphQLSchema(
  "staging",
  "staging schema at https://api.acme.com/graphql",
  Def.task(
    GraphQLSchemaLoader
      .fromIntrospection("https://api.acme.com/graphql", streams.value.log)
      .loadSchema()
  ).taskValue
)

graphqlCodegenSchema := graphqlRenderSchema.toTask("staging").value

This schema has some scalars defined

e.g.

# This represents a json encoded into a String.
scalar JsonString

which is then used in types

type MyAwesomeType implements Node {
   [...]
   meta: JsonString!
   [...]
}

Code generation works, but compilation fails.

In the generated code, it appears as:

case class MyAwesomeInput(something: String, ..., meta: JsonString)

which fails at the compilation step, because JsonString isn't defined:

[error] /.../acme/target/scala-2.12/src_managed/sbt-graphql/AcmeMutation.scala:27:135: not found: type JsonString
[error]   case class MyAwesomeType(something: String, ..., meta: JsonString)

Interestingly, ID is defined as type ID = String. I guess it's a built-in GraphQL type, since I don't see it defined explicitly anywhere in the schema.

I suspect the same might apply to enums defined in the schema.
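As a workaround, something that might help (the alias name and package below are my invention; only graphqlCodegenImports itself is a real plugin setting) is defining the scalar as a type alias and importing it into the generated sources:

```scala
// src/main/scala/com/example/scalars/package.scala
package com.example

package object scalars {
  // Map the GraphQL scalar onto a plain Scala type.
  type JsonString = String
}
```

```scala
// build.sbt: make the alias visible in the generated files.
graphqlCodegenImports += "com.example.scalars._"
```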

Code generation - The output directory for generated files should be configurable.

Hi,

Currently there is no way to configure the output directory for generated files. They are generated in the target directory by default.

As the generated files are source files, putting them in target doesn't seem right. Therefore, I think it would be very helpful if the plugin allowed setting the output folder, to be able to put them somewhere under the src directory.

Please share your thoughts.

Thanks
Lakha

Code generation for unions is flawed

I'll just document this here, sadly I don't have the time to fix it right now.

Version used: 0.16.4

Given the following query:

query Test {
    content {
      __typename

      ... on Foo {
        slug
      }

      ... on Bar {        
        id
        foo {          
          slug
        }
      }
    }
}    

The generated code looks like this (only the relevant part):

object Content {
  case class Foo(__typename: String, slug: String) extends Content
  object Foo {
    implicit val jsonDecoder: Decoder[Foo] = deriveDecoder[Foo]
    implicit val jsonEncoder: Encoder[Foo] = deriveEncoder[Foo]
  }

  case class Bar(__typename: String, id: Int, foo: Content.Foo) extends Content
  object Bar {
    case class Foo(__typename: String, slug: String)
    object Foo {
      implicit val jsonDecoder: Decoder[Foo] = deriveDecoder[Foo]
      implicit val jsonEncoder: Encoder[Foo] = deriveEncoder[Foo]
    }
    implicit val jsonDecoder: Decoder[Bar] = deriveDecoder[Bar]
    implicit val jsonEncoder: Encoder[Bar] = deriveEncoder[Bar]
  }

  implicit val jsonDecoder: Decoder[Content] = for {
    typeDiscriminator <- Decoder[String].prepare(_.downField("__typename"))
    value <- typeDiscriminator match {
      case "Foo" =>
        Decoder[Foo]
      case "Bar" =>
        Decoder[Bar]
      case other =>
        Decoder.failedWithMessage("invalid type: " + other)
    }
  } yield value
  implicit val jsonEncoder: Encoder[Content] = Encoder.instance[Content]({
    case v: Foo =>
      deriveEncoder[Foo].apply(v)
    case v: Bar =>
      deriveEncoder[Bar].apply(v)
  })
}

There are two bugs here (plus one oddity):

  1. Foo and Bar have case class fields for __typename, which means that we have to request __typename in the query for both of them or the decoding will fail for no good reason. It would make more sense to generate constants, since we already know the value anyway and it never changes.
// Instead of
case class Foo(__typename: String, slug: String)

// we should generate
case class Foo(slug: String) {
  def __typename: String = "Foo"
}

// or maybe even
case class Foo(slug: String) {
  def __typename: String = Foo.__typename
}
object Foo {
  def __typename: String = "Foo"
}
  2. Inside of Bar we reference Foo for the second union case in the query. In the generated code we have a dedicated Bar.Foo generated but for some dumb reason, we use Content.Foo in Bar and Bar.Foo is unused. This means the whole thing doesn't work if the two cases use different fields from Foo.
    I'm not sure if there is a reason for this (fragments?) or if it's just an honest mistake.

  3. Not really a bug but strange: The implementation for Encoder[Content] derives encoders inline instead of using the existing ones. The equivalent Decoder doesn't do this. Imo we shouldn't derive something inline, especially if it already exists.

Trouble getting codegen to work

Hey guys, super new to graphql/sangria, so I apologize in advance if my issue is misplaced.

I am attempting to generate my result classes from a given src/main/resources/schema.graphql file containing the following lines:

type Query {
  transaction: Transaction!
}

type Transaction {
  name: String
  date: String
  id: ID!
}

My project/plugins.sbt file looks like this:

addSbtPlugin("io.spray" % "sbt-revolver" % "0.9.1")
addSbtPlugin("com.heroku" % "sbt-heroku" % "2.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.1")
addSbtPlugin("rocks.muki" % "sbt-graphql" % "0.5.0")

and build.sbt looks like this:

import rocks.muki.graphql.schema.SchemaLoader

name := "test"
version := "0.0.1"

scalaVersion := "2.12.4"
scalacOptions ++= Seq("-deprecation", "-feature")

libraryDependencies ++= Seq(
  "org.sangria-graphql" %% "sangria" % "1.3.0",
  "org.sangria-graphql" %% "sangria-circe" % "1.1.0",
  "org.sangria-graphql" %% "sangria-relay" % "1.4.1",

  "com.typesafe.akka" %% "akka-http" % "10.1.0",
  "de.heikoseeberger" %% "akka-http-circe" % "1.20.0",

  "io.circe" %%	"circe-core" % "0.9.2",
  "io.circe" %% "circe-parser" % "0.9.2",
  "io.circe" %% "circe-optics" % "0.9.2",
  "com.github.nscala-time" %% "nscala-time" % "2.18.0",

  "org.scalatest" %% "scalatest" % "3.0.5" % Test
)

Revolver.settings
enablePlugins(JavaAppPackaging)
enablePlugins(GraphQLSchemaPlugin, GraphQLQueryPlugin)
enablePlugins(GraphQLCodegenPlugin)

graphqlCodegenStyle := Sangria

val inputSchema = new File("src/main/resources/schema.graphql")
graphqlSchemas += GraphQLSchema(
  "transactions",
  "transactions schema",
  Def.task(
    SchemaLoader.fromFile(inputSchema).loadSchema()
  ).taskValue
)

graphqlCodegenSchema := graphqlRenderSchema.toTask("transactions").value

When I run sbt graphqlCodegen, the operation succeeds and creates a bunch of new files inside the target/ folder. Here is the success message:

Harrisons-MacBook-Pro:coinecta-graphql-server harrisonwang$ sbt graphqlCodegen
[info] Loading settings from idea.sbt ...
[info] Loading global plugins from /Users/harrisonwang/.sbt/1.0/plugins
[info] Loading settings from assembly.sbt,plugins.sbt ...
[info] Loading project definition from /Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/project
[info] Loading settings from build.sbt ...
[info] Set current project to test (in build file:/Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/)
[info] Rendering schema to: /Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/target/graphql/transactions.graphql
[info] Generate code for 1 queries
[info] Use schema /Users/harrisonwang/Desktop/coinecta/coinecta-graphql-server/target/graphql/transactions.graphql for query validation
[success] Total time: 1 s, completed May 18, 2018 11:26:01 PM

Unfortunately, none of these files (to my knowledge) contain the desired generated code; by reading the docs, I expected the result classes to live in a package called graphql.codegen. The file target/scala-2.12/classes/graphql/codegen/GraphQLCodegen.class has the correct package name, but the file itself is empty:

package graphql.codegen
object GraphQLCodegen extends scala.AnyRef {
}

I think the command is accessing the correct schema.graphql file, because it creates a new file target/graphql/transactions.graphql which contains the same schema, so I am not sure where my mistake in the process is.

If anyone could give me any tips/pointers on how to get the code generation working properly, it would be much appreciated. Sorry for the lengthy post, but I wanted to provide as much context as possible. Thanks a bunch!

Generate circe Encoder for query Variables

Hey,

Currently, only Decoder instances for the returned GraphQL result are generated. But the input variables of the query are usually sent to GraphQL as JSON as well.

It would be beneficial to add the Encoder derivation for the Variables case class.

I'm not intimately familiar with the graphql spec, so I'm not sure whether input variables can be nested, but if not deriving the Encoder for the top-level Variables case class might be enough.
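A sketch of what the generated object could additionally emit (the Variables shape here is invented for illustration; circe semiauto derivation is assumed, matching the style of the existing generated Decoders):

```scala
import io.circe.Encoder
import io.circe.generic.semiauto.deriveEncoder

case class Variables(episode: String, limit: Int)

object Variables {
  // Mirror of the existing jsonDecoder, but for serializing the variables.
  implicit val jsonEncoder: Encoder[Variables] = deriveEncoder[Variables]
}
```

Since semiauto derivation needs an Encoder in scope for every field, nested input case classes would each need their own derived Encoder as well.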

Question about code generation issue in test phase

Hi @muuki88. Thank you for the nice plugin!

I'm using a pretty default sbt-graphql configuration in a Play project. I ran into an issue with code generation, which happens only in the test phase, with an error similar to: Symbol 'type graphql.codegen.types.myInputType' is missing from the classpath.
I think this may be related to the different generated Interfaces.scala files in /main and in /test:

/* ../web/target/scala-2.13/src_managed/main/sbt-graphql/Interfaces.scala */
package graphql.codegen
import io.circe.{ Decoder, Encoder }
import io.circe.generic.semiauto.{ deriveDecoder, deriveEncoder }
object types {
  case class myInputType(questionId: Option[String], userAnswers: List[String])
  case object myInputType {
    implicit val jsonDecoder: Decoder[myInputType] = deriveDecoder[myInputType]
    implicit val jsonEncoder: Encoder[myInputType] = deriveEncoder[myInputType]
  }
}

Classes are missing in the file generated in /test:

/* ../web/target/scala-2.13/src_managed/test/sbt-graphql/Interfaces.scala*/
package graphql.codegen
import io.circe.{ Decoder, Encoder }
import io.circe.generic.semiauto.{ deriveDecoder, deriveEncoder }
object types

Any help would be greatly appreciated!

Code generation question

Hi @muuki88, thanks for providing this!

I had a question about getting started. In my build.sbt I added enablePlugins(GraphQLCodegenPlugin) and nothing else related to the plugin because I have a file src/main/resources/schema.graphql, which I believe is the default value. I ran sbt graphqlCodegenSchema, but nothing happened. I didn't understand if anything else was required. Am I missing something?

Additional imports are quoted and therefore broken

With the new 0.10.1 release I have a problem with "additional imports". The imports are still inserted into the generated sources, but now have back-ticks that aren't correct:

Additional imports:

graphqlCodegenImports ++= Seq(
  "java.time._",
  "io.circe.java8.time._",
),

Compiler complains:

[error] SomeGeneratedSourceFile.scala:2:8: not found: object java.time
[error] import `java.time`._
[error]        ^
[error] SomeGeneratedSourceFile.scala:3:8: not found: object io.circe.java8.time
[error] import `io.circe.java8.time`._
[error]        ^

My guess is that scalameta thinks that java.time is one big identifier instead of a package path and therefore quotes it.
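To illustrate the suspected behavior (this is a guess at the bug, not the actual scalameta code): quoting must be decided per path segment, not for the whole dotted path:

```scala
// Render an import line, back-quoting only segments that are not plain
// identifiers. Treating the whole path as one identifier would instead
// produce import `java.time`._, which is what the broken output looks like.
def renderImport(path: String): String = {
  def needsQuoting(part: String): Boolean =
    !part.forall(c => c.isLetterOrDigit || c == '_')
  val rendered = path
    .split('.')
    .map(p => if (needsQuoting(p)) s"`$p`" else p)
    .mkString(".")
  s"import $rendered"
}
```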

Fragment code generation is broken for union types

The code generation for fragments produces invalid, non-compilable Scala code.

Example schema and query

A simple schema with a union type

union Animal = Cat | Dog
type Cat {
  name: String!
}
type Dog {
  name: String!
}

type Query {
  animals: [Animal!]!
}

The animal fragments

fragment AnimalName on Animal {
  ...DogName
  ...CatName
}

fragment DogName on Dog {
  name
}

fragment CatName on Cat {
  name
}

And use the fragment

# import fragments/animalName.fragment.graphql
query AllAnimals {
  animals {
    ...AnimalName
  }
}

Expected code

The fragment and query should generate the following types in the Interfaces.scala

trait AnimalName
trait DogName extends AnimalName {
   def name: String
}
trait CatName extends AnimalName {
  def name: String
}

And the query object code should use these types

object GetAnimals {
  object GetAnimals extends GraphQLQuery {
    val document: sangria.ast.Document = graphql"""..."""
    case class Variables()
    case class Data(animals: List[AnimalName])
    case class DogName(name: String) extends DogName
    case class CatName(name: String) extends CatName
  }
}

Complexities

  • We need to make sure that we reference the correct trait in the generated query code. In the example, DogName is defined both in Interfaces.scala and in the GetAnimals$GetAnimals object
  • The circe code generation needs to put the fragment decoders/encoders above the Data codecs so they can be found during derivation

Fix .scalafmt.conf

The name of the scalafmt config file is wrong. It should be .scalafmt.conf but is currently .scalafmt.

This means all scalafmt tools use their defaults, which are sometimes not 100% the same. For example, my IntelliJ scalafmt plugin does something different than the sbt plugin used in this project.

Since fixing this would require a one-time reformatting of all code, I recommend doing this only if there are no open branches/PRs.

It is probably a good idea to fix this in the "test-project" as well.

Stable ordering of output

Hey, thanks for doing this project!

We have used it to secure stability of our API and detect changes to the API in PRs at Humio.

I was wondering if it would be possible to make the ordering of the entries in the generated schema stable.

Currently if you change the order of fields in Sangria, it will also change in the output.

We can sort them in code, but it would be useful to have this in the plugin, so reordering does not trigger a false positive in our "API change" check.
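The idea in a self-contained sketch (names are hypothetical, not the plugin's internals): sort fields by name before rendering, so the output no longer depends on declaration order:

```scala
case class Field(name: String, tpe: String)

// Render a type with its fields sorted alphabetically: two schemas that
// differ only in field order produce byte-identical output.
def renderType(name: String, fields: Seq[Field]): String =
  fields
    .sortBy(_.name)
    .map(f => s"  ${f.name}: ${f.tpe}")
    .mkString(s"type $name {\n", "\n", "\n}")
```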

Generate code from queries

Similar to apollo-codegen it would be nice to generate domain classes from queries against a schema.

A lot of this has been prototyped in sangria-codegen; however, the code generator should ideally use Sangria's AST directly instead of an intermediate representation based on an initial pass.

Open questions:

  • Is it OK to use scala.meta for the code generation itself?
  • Is generation of traits from interfaces a "must have"? In the sangria-codegen project they were temporarily disabled to ensure progress. (see mediative/sangria-codegen#12)

about graphql-Java

Excuse me, we use graphql-java a lot. Could this plugin support graphql-java via a PR? Although I don't know if graphql-java can provide a parser or validation.

Add setting for additional imports in generated code

The generated code may require additional imports for

  • semiderived json codecs
  • custom scalar types #26

For this reason a user should be able to specify additional imports. An API could look like this.

In your build.sbt

graphqlCodegenImports += "com.example.time.codecs._"

The additional imports should be available in the codegen context.

Schema generation from Introspection

Hi Guys,

I am trying to generate a schema file which is a combination of my local schema and the results of an introspection:

graphqlSchemaSnippet := "demo.StaticSchema.schema"

target in graphqlSchemaGen := target.value / "graphql"

The above config generates a schema file from my local schema. I also want to generate a schema file from my introspection result:

graphqlSchemas += GraphQLSchema(
  "Remote",
  "Remote Schema",
  Def.task(
    GraphQLSchemaLoader
      .fromIntrospection("http://localhost:8081/graphql", streams.value.log)
      .loadSchema()
  ).taskValue
)

But after running sbt graphqlSchemaGen I don't see any introspection schema. I guess I am missing some piece and am not able to figure it out from the documentation.

Any help would be appreciated. Thanks
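If I understand the task layout correctly (this wiring is my assumption from the docs, not verified), graphqlSchemaGen only renders the schema defined by graphqlSchemaSnippet; schemas registered via graphqlSchemas are rendered with the graphqlRenderSchema input task instead:

```scala
// From the sbt shell:
//   graphqlRenderSchema Remote
// or wired into the build, e.g. to feed codegen the rendered remote schema:
graphqlCodegenSchema := graphqlRenderSchema.toTask("Remote").value
```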
