twitter / rsc

Experimental Scala compiler focused on compilation speed

License: Apache License 2.0

Scala 99.52% Shell 0.11% Python 0.31% Java 0.05%

rsc's Introduction

Rsc

Reasonable Scala compiler (also known as Rsc) is an experimental Scala compiler focused on compilation speed. This project is developed by the Language Tools team at Twitter.

Rsc is not a fork, but a reimplementation of the Scala compiler. We believe that a performance-oriented rewrite will provide a unique perspective on compilation costs introduced by various Scala features and idioms - something that is currently very hard to quantify in existing compilers.

With Rsc, our mission is to complement official compilers and assist with their evolution through our experiments. We are aiming to discover actionable insight into Scala compiler architecture and language design that will help compiler developers at Lightbend and EPFL to optimize their compilers for the benefit of the entire Scala community.

Goals

  • Dramatically improve Scala compilation performance
  • Study compilation time overhead of various Scala features
  • Identify a subset of Scala that can be compiled with reasonable speed
  • Facilitate knowledge transfer to other Scala compilers

Non-goals

Status

Documentation

Credits

Our project is inspired by the work of Martin Odersky, Grzegorz Kossakowski and Denys Shabalin. Martin inspiringly showed that in this day and age it is still possible to write a Scala compiler from scratch. Greg unexpectedly demonstrated that compiling Scala can be blazingly fast. Denys shockingly revealed that Scala apps can have instant startup time.

rsc's People

Contributors

abhik1998, densh, juliaferraioli, olafurpg, rorygraves, sundresh, wiwa, xeno-by


rsc's Issues

Generate SemanticDB for re2s

SemanticDB is a schema for semantic information produced by Scala compilers. Introduced less than a year ago in Scalameta, it has already become a foundation for nextgen developer tools for Scala (Scalafix, Metadoc and others). We are also using SemanticDB internally at Twitter to power our internal code intelligence infrastructure.

In Rsc, we would like to join this movement as well. Emitting SemanticDB will allow us to interoperate with the growing family of SemanticDB-based tools, getting our compiler closer to first-class tooling support. Let's see how much effort it will take to generate SemanticDB during compilation and what performance overhead that will incur.

Figure out the methodology for native benchmarks

At the moment, the benchmarking infrastructure for RscNativeTypecheck is very minimal. We should bring it on par with the robustness that JMH provides (at the very minimum, add support for score errors) or replace it with a more advanced tool.

Use release mode of Scala Native for benchmarks

It looks like you're building in debug mode for benchmarks. To fix this, please add the following to the nativeSettings:

nativeMode := "release"

Debug mode has most of the optimizations disabled to improve compile times.

Carefully implement Scope.members

#70 introduced Scope.members, but only implemented it in SemanticdbScope.members because the implementation for other scopes needs migration from HashMap to LinkedHashMap, which may hurt performance. Let's do this experiment and see where that leads us.
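The tradeoff above can be sketched as follows. This is a minimal illustration, not Rsc's actual scope implementation: a LinkedHashMap keeps declaration order (which Scope.members needs) at the cost of maintaining an insertion-order list on top of the hash table, which is the performance concern being raised.

```scala
import scala.collection.mutable

// Hypothetical sketch: a scope whose members must come back in declaration
// order. LinkedHashMap preserves insertion order; a plain HashMap would not.
class OrderedScope {
  private val entries = mutable.LinkedHashMap.empty[String, String]
  def enter(name: String, sym: String): Unit = entries(name) = sym
  def resolve(name: String): Option[String] = entries.get(name)
  def members: List[String] = entries.values.toList // declaration order
}
```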

Crosscompile to Scala 2.12

We need to crosscompile to Scala 2.12 to be compatible with Sbt 1.0. Scala Native doesn't support Scala 2.12 yet, but that's not a problem, because Sbt doesn't have a native build.

Non-static selections

class C {
  class X
}

object M {
  val c = new C
  def m: c.X = ???
}
Test.scala:7: error: compiler crash
  def m: c.X = ???
         ^
rsc.util.CrashException: val def <_empty_/M.c().> = new C()
DefnMethod(Mods(List(ModVal())), TermId("c"), Nil, Nil, None, Some(TermNew(Init(TptId("C"), Nil))))
	at rsc.util.CrashException$.apply(CrashException.scala:24)
	at rsc.util.ErrorUtil$class.crash(ErrorUtil.scala:18)
	at rsc.util.package$.crash(package.scala:5)
	at rsc.outline.Outliner.resolveScope(Outliner.scala:394)
	at rsc.outline.Outliner.loop$3(Outliner.scala:337)
	at rsc.outline.Outliner.apply(Outliner.scala:353)
	at rsc.outline.Outliner.rsc$outline$Outliner$$apply(Outliner.scala:254)
	at rsc.outline.Outliner.apply(Outliner.scala:232)
	at rsc.outline.Outliner.apply(Outliner.scala:19)
	at rsc.Compiler.rsc$Compiler$$outline(Compiler.scala:158)
	at rsc.Compiler$$anonfun$tasks$4.apply$mcV$sp(Compiler.scala:78)
	at rsc.Compiler$$anonfun$run$2.apply(Compiler.scala:34)
	at rsc.Compiler$$anonfun$run$2.apply(Compiler.scala:31)
	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
	at rsc.Compiler.run(Compiler.scala:31)
	at rsc.cli.Main$.run(Main.scala:21)
	at rsc.cli.Main$.main(Main.scala:11)
	at rsc.cli.Main.main(Main.scala)

Compare performance of Rsc and Kentucky Mule

Kentucky Mule made quite a splash by announcing that they are ~2000x faster than Scalac on some benchmarks. Rsc is only ~20x faster than Scalac on some other benchmarks.

It would be great to investigate where this performance difference comes from:

  • From some flaws in the implementation of the typechecker?
  • From Rsc doing more work than Kentucky Mule (and Scalac doing even more work than Rsc)?
  • From us not having spent time microoptimizing Rsc and instead pushing to have a first prototype as soon as possible (more on that decision in our documentation)?
  • From differences in benchmarks?

Type inference

It seems to be very hard to properly support this language feature in Rsc in the near future, so this ticket is here for reference mostly. In the context of outlining, type inference involves:

  • Val/var/def return type inference
  • Type argument inference for inits (i.e. constructors and super constructors)
  • Polymorphic default parameters (see #331 for details).

Figure out the methodology for JVM benchmarks

Over the Thanksgiving break, I've been playing with various values for JMH annotations on our benchmarks:

@BenchmarkMode(Array(SampleTime))
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = X, time = Y, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = X, time = Y, timeUnit = TimeUnit.SECONDS)
@Fork(value = Z, jvmArgs = Array("-Xms2G", "-Xmx2G"))
class HotRscTypecheck extends RscTypecheck {
  @Benchmark
  def run(bs: BenchmarkState): Unit = {
    runImpl(bs)
  }
}

The research question is: "Does there exist a reasonable combination of X, Y and Z that avoids a seemingly inevitable 1ms lower bound on run-to-run variance?".

Type projections

class C {
  class X
}

object M {
  def m: C#X = ???
}
Test.scala:6: error: illegal outline
  def m: C#X = ???
         ^
class C[T, U] {
  type X = Map[T, U]
}

object M {
  def m: C[Int, Int]#X = ???
}
failed rsc: C.scala:6: error: illegal outline
  def m: C[Int, Int]#X = ???
         ^

Update the roadmap

#85 has changed a lot in our plans for the future. Now the time has come to document the new plans.

Fix Intellij highlight errors

At the moment, IntelliJ doesn't work well on the Rsc codebase. After opening our project in IntelliJ, you may experience incomplete code intelligence, spurious red squiggles and other unpleasant issues. We believe that this is the case because of insufficient support for sbt-crossproject that we use to crosscompile Rsc to both JVM and Native. We haven't yet found a good workaround for this problem.

Refined types

class C {
  type X = { def x: Int }
}
different nsc (-) vs rsc (+): _empty_/C#X#
 signature {
   typeSignature {
     type_parameters {
     }
     lower_bound {
-      structuralType {
-        tpe {
-          withType {
-            types {
-              typeRef {
-                prefix {
-                }
-                symbol: "scala/AnyRef#"
-              }
-            }
-          }
-        }
-        declarations {
-          hardlinks {
-            symbol: "localNNN"
-            kind: METHOD
-            properties: 4
-            name: "x"
-            accessibility {
-              tag: PUBLIC
-              symbol: ""
-            }
-            language: SCALA
-            signature {
-              methodSignature {
-                type_parameters {
-                }
-                return_type {
-                  typeRef {
-                    prefix {
-                    }
-                    symbol: "scala/Int#"
-                  }
-                }
-              }
-            }
-          }
-        }
-      }
     }
     upper_bound {
-      structuralType {
-        tpe {
-          withType {
-            types {
-              typeRef {
-                prefix {
-                }
-                symbol: "scala/AnyRef#"
-              }
-            }
-          }
-        }
-        declarations {
-          hardlinks {
-            symbol: "localNNN"
-            kind: METHOD
-            properties: 4
-            name: "x"
-            accessibility {
-              tag: PUBLIC
-              symbol: ""
-            }
-            language: SCALA
-            signature {
-              methodSignature {
-                type_parameters {
-                }
-                return_type {
-                  typeRef {
-                    prefix {
-                    }
-                    symbol: "scala/Int#"
-                  }
-                }
-              }
-            }
-          }
-        }
-      }
     }
   }
 }

Benchmark alternative representations for ids

Both Scalac and Dotty have designated hashtables that intern names created by the compiler. For the sake of simplicity, we didn't implement analogous infrastructure in Rsc, but at some point we want to experiment and see how much this can buy us.
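The interning idea can be sketched in a few lines. This is a hypothetical illustration of the technique, not the actual Scalac/Dotty name tables: identical name strings map to one canonical instance, so later comparisons can use reference equality instead of character-by-character equals.

```scala
import java.util.HashMap

// Hypothetical sketch of a name-interning table: repeated lookups of equal
// strings always return the same canonical instance.
final class NameTable {
  private val interned = new HashMap[String, String]
  def intern(chars: String): String = {
    val existing = interned.get(chars)
    if (existing != null) existing
    else { interned.put(chars, chars); chars }
  }
}
```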

Fix the broken benchbox

Looks like the expected kernel version in bin/bench_ci_environment and the actual kernel version on our benchbox have diverged. We need to do something about that.

Benchmark alternatives to j.u.HashMap

Both Scalac and Dotty use hand-written hashtables to represent scopes. Moreover, Kentucky Mule also uses hand-written hashtables, highlighting the fact that they perform better than j.u.HashMap. Let's see how alternative representations for scopes will affect our performance.
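A hand-written table in that spirit might look like the following sketch. It is an assumption for illustration only (fixed capacity, no resizing): flat arrays with linear probing avoid the per-entry node objects that j.u.HashMap allocates.

```scala
// Hypothetical sketch of a hand-written open-addressing scope table.
// Capacity must be a power of two; resizing is omitted for brevity.
final class FlatScope(capacity: Int = 64) {
  private val names = new Array[String](capacity)
  private val syms = new Array[String](capacity)
  def enter(name: String, sym: String): Unit = {
    var i = name.hashCode & (names.length - 1)
    while (names(i) != null && names(i) != name)
      i = (i + 1) & (names.length - 1) // linear probing
    names(i) = name; syms(i) = sym
  }
  def lookup(name: String): String = {
    var i = name.hashCode & (names.length - 1)
    while (names(i) != null) {
      if (names(i) == name) return syms(i)
      i = (i + 1) & (names.length - 1)
    }
    null
  }
}
```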

Consider making a TASTY backend

https://github.com/twitter/reasonable-scala/blob/master/docs/compiler.md currently says:

Codegen
We decided to postpone generation of executable code until we implement typechecking for a sizable subset of the language. Therefore, there is nothing to see in this section. Please come back later.

As I'm sure you're aware, going from typechecker output to bytecode requires a huge amount of work. I wonder if you've considered outputting TASTY instead and using Dotty to emit bytecode (and JavaScript in the future), at least at first? This would allow you to validate the design of your typechecker without developing a full compilation pipeline, and it would also bring some other benefits:

  • Once you can unpickle TASTY, you can easily support separate compilation
  • In my experience, TASTY is great for IDE support
  • Two-way interoperability with Dotty becomes achievable (and maybe scalac interop in the future).

Currently, TASTY is informally specified in https://github.com/lampepfl/dotty/blob/master/compiler/src/dotty/tools/dotc/core/tasty/TastyFormat.scala, and a proof-of-concept executable specification exists at https://github.com/DarkDimius/tasty-kaitai-spec

Of course you may already have completely different plans for the future, this is just an idea I wanted to share :)

Restore automated performance benchmarks

I've temporarily disabled automated performance benchmarks to get #85 in. That went without a hitch, so now we'll need to restore the benchmarks. This may take a while, since the structure of the code and the benchmarks themselves have changed.

Benchmark Outline vs ClassInfoType/MethodType/etc

Following the Scalameta principles, Rsc represents definition signatures with trees instead of dedicated data structures like in Scalac and Dotty. We have found that this design decision simplifies the compiler, but we also want to understand whether it has any negative performance implications.

Subtyping checks?

I don't see any mention of subtyping in the code or the documentation (which is otherwise very well written and detailed for such a young project, nice job!). And indeed, the following happily typechecks:

class Foo()
class Bar()

object Test {
  val x: Bar = new Foo()
}

Will subtyping checks be implemented in the first milestone?

Benchmark the SemanticDB-based symbol table

SemanticDB is pretty interesting because it makes it possible to develop tools that weren't possible before, at a scale that wasn't possible before. For more than a year, this allowed us not to care about its performance at all.

However, now the time has come to study and optimize SemanticDB. Let's start with benchmarking, just with the read-only aspect (since that's the only aspect that's relevant for M2, read-write will be part of M3).

Include both JVM and native benchmarks in a quick bench invocation

sbt benchJVM is very convenient, but unfortunately it conveniently omits native. If we want to be serious about native performance, we need to have it as a first-class citizen in all our benches - both quick ones and nightly ones. Performance is not portable between platforms.

Restore the native build

At the moment, I've had to disable crosscompilation to Scala Native because of a mysterious segfault that started showing up recently. We should investigate this soon, so that our codebase doesn't diverge into a non-crosscompilable state. I'll assign this ticket to the current milestone.

XML literals

class C {
  def xml = <foo />
}
Test.scala:2: error: compiler crash
  def xml = <foo />
           ^
rsc.util.CrashException: unsupported: xml literals
	at rsc.util.CrashException$.apply(CrashException.scala:24)
	at rsc.util.ErrorUtil$class.crash(ErrorUtil.scala:18)
	at rsc.util.package$.crash(package.scala:5)
	at rsc.scan.Scanner.dispatchNormal(Scanner.scala:254)
	at rsc.scan.Scanner.next(Scanner.scala:36)
	at rsc.parse.Newlines$in$.nextToken(Newlines.scala:53)
	at rsc.parse.Defns$class.defnDef(Defns.scala:60)
	at rsc.parse.Parser.defnDef(Parser.scala:10)
	at rsc.parse.Templates$$anonfun$templateBraces$1$$anonfun$apply$1.apply(Templates.scala:150)
	at rsc.parse.Templates$$anonfun$templateBraces$1$$anonfun$apply$1.apply(Templates.scala:90)
	at rsc.parse.Helpers$class.inBraces(Helpers.scala:65)
	at rsc.parse.Parser.inBraces(Parser.scala:10)
	at rsc.parse.Templates$$anonfun$templateBraces$1.apply(Templates.scala:90)
	at rsc.parse.Templates$$anonfun$templateBraces$1.apply(Templates.scala:87)
	at rsc.parse.Wildcards$class.banEscapingWildcards(Wildcards.scala:18)
	at rsc.parse.Parser.banEscapingWildcards(Parser.scala:10)
	at rsc.parse.Templates$class.templateBraces(Templates.scala:87)
	at rsc.parse.Templates$class.defnTemplate(Templates.scala:43)
	at rsc.parse.Parser.defnTemplate(Parser.scala:10)
	at rsc.parse.Defns$class.defnClass(Defns.scala:17)
	at rsc.parse.Parser.defnClass(Parser.scala:10)
	at rsc.parse.Sources$$anonfun$rsc$parse$Sources$$packageStats$1.apply(Sources.scala:71)
	at rsc.parse.Sources$$anonfun$rsc$parse$Sources$$packageStats$1.apply(Sources.scala:52)
	at rsc.parse.Wildcards$class.banEscapingWildcards(Wildcards.scala:18)
	at rsc.parse.Parser.banEscapingWildcards(Parser.scala:10)
	at rsc.parse.Sources$class.rsc$parse$Sources$$packageStats(Sources.scala:52)
	at rsc.parse.Sources$$anonfun$rsc$parse$Sources$$sourceStats$1.apply(Sources.scala:47)
	at rsc.parse.Sources$$anonfun$rsc$parse$Sources$$sourceStats$1.apply(Sources.scala:17)
	at rsc.parse.Wildcards$class.banEscapingWildcards(Wildcards.scala:18)
	at rsc.parse.Parser.banEscapingWildcards(Parser.scala:10)
	at rsc.parse.Sources$class.rsc$parse$Sources$$sourceStats(Sources.scala:17)
	at rsc.parse.Sources$class.source(Sources.scala:14)
	at rsc.parse.Parser.source(Parser.scala:10)
	at rsc.Compiler$$anonfun$rsc$Compiler$$parse$1.apply(Compiler.scala:97)
	at rsc.Compiler$$anonfun$rsc$Compiler$$parse$1.apply(Compiler.scala:84)
	at scala.collection.immutable.List.flatMap(List.scala:338)
	at rsc.Compiler.rsc$Compiler$$parse(Compiler.scala:84)
	at rsc.Compiler$$anonfun$tasks$1.apply$mcV$sp(Compiler.scala:75)
	at rsc.Compiler$$anonfun$run$2.apply(Compiler.scala:34)
	at rsc.Compiler$$anonfun$run$2.apply(Compiler.scala:31)
	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
	at rsc.Compiler.run(Compiler.scala:31)
	at rsc.cli.Main$.run(Main.scala:21)
	at rsc.cli.Main$.main(Main.scala:11)
	at rsc.cli.Main.main(Main.scala)

Upgrade to Scala Native 0.3.6

0.3.5 includes a number of changes that improve the performance of Rsc. Here are some very rough numbers from the first 20 iterations. Before:

Time: 243822417 ns
Time: 174314736 ns
Time: 172057764 ns
Time: 180346046 ns
Time: 172030039 ns
Time: 173087314 ns
Time: 165330589 ns
Time: 164671580 ns
Time: 164822682 ns
Time: 165409924 ns
Time: 164684067 ns
Time: 164456743 ns
Time: 164551160 ns
Time: 164383594 ns
Time: 164386438 ns
Time: 164382765 ns
Time: 164515369 ns
Time: 165105908 ns
Time: 165113140 ns
Time: 164425373 ns

And after:

Time: 134191805 ns
Time: 104693424 ns
Time: 95394359 ns
Time: 93703848 ns
Time: 94290683 ns
Time: 94184145 ns
Time: 94216223 ns
Time: 94065560 ns
Time: 93797489 ns
Time: 93983223 ns
Time: 93736858 ns
Time: 93647207 ns
Time: 93837259 ns
Time: 94148113 ns
Time: 94109868 ns
Time: 93244565 ns
Time: 93401882 ns
Time: 93801207 ns
Time: 92924482 ns
Time: 93191116 ns

Update: 0.3.6 supersedes 0.3.5 due to a regression in the sbt testing integration.

Scanner.emitIdOrKeyword map.getOrElse generating anonymous fn garbage

Profiling compiles shows that:

val token = keywords.getOrElse(lexeme, ID)

generates an anonymous function for the default-value argument (an AbstractFunction0). It accounts for 1/3 of all Function0 instances created and ranks high in the garbage stakes.

I tried some simple things to get rid of the anonymous function and failed (replacing it with get() effectively replaces the Function0 with a Some()).

Switching the tokens map to a j.u.Map would solve it (but is ugly).
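The alternatives can be sketched side by side. The names keywords, lexeme and ID come from the issue; the concrete types here are assumptions for illustration. getOrElse takes its default by-name, so each call allocates a Function0, while a null-returning j.u.HashMap lookup allocates nothing on the hit-or-miss path.

```scala
// Sketch of allocation-free alternatives to keywords.getOrElse(lexeme, ID).
object KeywordLookup {
  val ID = 0
  val keywords = Map("class" -> 1, "def" -> 2)

  // 1) getOrElse: the by-name default compiles to a Function0 per call.
  def slow(lexeme: String): Int = keywords.getOrElse(lexeme, ID)

  // 2) j.u.HashMap with a null check: no Function0, no Option.
  private val jmap = new java.util.HashMap[String, Integer]
  keywords.foreach { case (k, v) => jmap.put(k, v) }
  def fast(lexeme: String): Int = {
    val v = jmap.get(lexeme)
    if (v == null) ID else v.intValue
  }
}
```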

Upgrade to Scala Native 0.3.7

Before we do that, we'll need to wait for Scalameta 3.7.0, because 3.6.0 is only published for my custom fork of Scala Native 0.3.6.

Scheduler.assignUid doing a lot of work via PrettyStr

From profiling: Scheduler.assignUid is doing a lot of work when called. It calls sid.str, which defers to Pretty.str, which in turn defers to PrettyType.str.

This has things like p.str("<" + uid + ">"), which creates a string via concatenation, only for it to be appended to a StringBuilder later (it should probably be p.str("<"); p.str(uid); p.str(">")).

10K calls to PrettySid.str generated:

  • 20K StringBuilders
  • 10K Ranges
  • 20K j.l.StringBuilders
  • 10K StringOps instances

This is a fair way down the optimisation list (it's not in the top 20), but it feels like this should not be in the hot path.
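The suggested fix can be sketched as follows. Printer here is a hypothetical stand-in for Rsc's pretty-printer; only the p.str calls are taken from the issue.

```scala
// Sketch: append pieces directly to the underlying StringBuilder instead of
// concatenating them into an intermediate String first.
class Printer {
  private val sb = new StringBuilder
  def str(s: String): Unit = sb.append(s)
  def str(i: Int): Unit = sb.append(i)
  override def toString: String = sb.toString
}

def printUid(p: Printer, uid: Int): Unit = {
  // Before: p.str("<" + uid + ">") builds an intermediate String.
  // After: three direct appends, no intermediate allocation.
  p.str("<"); p.str(uid); p.str(">")
}
```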

Implement -Xprint:scan and -Ystop-after:scan

Even though the scanner is a standalone module in Rsc, there's no scan phase, so it's impossible to get insight into what's going on there using CLI flags. We should fix this to improve hackability.

Non-wildcard existentials

class C {
  type X = List[T] forSome { type T }
}
different nsc (-) vs rsc (+): _empty_/C#X#
 signature {
   typeSignature {
     type_parameters {
     }
     lower_bound {
-      existentialType {
-        tpe {
-          typeRef {
-            prefix {
-            }
-            symbol: "scala/package.List#"
-            type_arguments {
-              typeRef {
-                prefix {
-                }
-                symbol: "localNNN"
-              }
-            }
-          }
-        }
-        declarations {
-          hardlinks {
-            symbol: "localNNN"
-            kind: TYPE
-            properties: 4
-            name: "T"
-            accessibility {
-              tag: PUBLIC
-              symbol: ""
-            }
-            language: SCALA
-            signature {
-              typeSignature {
-                type_parameters {
-                }
-                lower_bound {
-                  typeRef {
-                    prefix {
-                    }
-                    symbol: "scala/Nothing#"
-                  }
-                }
-                upper_bound {
-                  typeRef {
-                    prefix {
-                    }
-                    symbol: "scala/Any#"
-                  }
-                }
-              }
-            }
-          }
-        }
-      }
     }
     upper_bound {
-      existentialType {
-        tpe {
-          typeRef {
-            prefix {
-            }
-            symbol: "scala/package.List#"
-            type_arguments {
-              typeRef {
-                prefix {
-                }
-                symbol: "localNNN"
-              }
-            }
-          }
-        }
-        declarations {
-          hardlinks {
-            symbol: "localNNN"
-            kind: TYPE
-            properties: 4
-            name: "T"
-            accessibility {
-              tag: PUBLIC
-              symbol: ""
-            }
-            language: SCALA
-            signature {
-              typeSignature {
-                type_parameters {
-                }
-                lower_bound {
-                  typeRef {
-                    prefix {
-                    }
-                    symbol: "scala/Nothing#"
-                  }
-                }
-                upper_bound {
-                  typeRef {
-                    prefix {
-                    }
-                    symbol: "scala/Any#"
-                  }
-                }
-              }
-            }
-          }
-        }
-      }
     }
   }
 }

Automatically load dependency metadata for re2s

At the moment of writing, Rsc does not support reading signatures of Scala and Java libraries. Instead, we work around this by declaring stubs that represent signatures in a textual format. That was good enough for the first prototype, but now we need something better.

Add a comprehensive test corpus for the parser

I think we should do the same thing that we do in scalameta/scalameta - hand-pick a huge corpus of Scala files and write a test that ensures that our scanner and parser work for them in a reasonable fashion.

Create a nightly benchmark infrastructure

We have an automated benchmark infrastructure (bin/bench), but what we also need is something that runs bin/bench every night to make sure that we don't miss performance regressions. Maybe, we would want to also run benchmarks for every pull request, but that may be trickier, since the full suite takes about an hour on quite a powerful machine.

Benchmark NoUid vs Option[Uid]

Both Scalac and Dotty extensively use null objects instead of options. One of the reasons for that, which I heard from someone during my time at EPFL, is performance. Let's quantify this by trying to get rid of NoUid and replacing all Uid-typed fields with Option[Uid].
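The two representations under comparison can be sketched as follows. Uid and NoUid are names from the issue; the concrete definitions are assumptions for illustration.

```scala
// Hypothetical sketch: null-object sentinel vs Option wrapper.
object Uids {
  type Uid = Int
  val NoUid: Uid = -1

  // Null-object style: a sentinel checked with a branch, no allocation.
  def isDefined(uid: Uid): Boolean = uid != NoUid

  // Option style: type-safe, but every Some(uid) boxes the Int and allocates.
  def isDefinedOpt(uid: Option[Uid]): Boolean = uid.isDefined
}
```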

Specification for Reasonable Scala

In Rsc, we are starting from scratch, and we want to use this opportunity to innovate in documentation. Our current plan is to write up a spec for the subset of Scala that Rsc currently supports and to add features to Rsc only if their implementation is accompanied by a comprehensive spec update.

Input.string leaks file handles

sealed class Input protected (val file: File) extends Pretty {
  lazy val string: String = {
    val codec = scala.io.Codec.UTF8
    scala.io.Source.fromFile(file)(codec).mkString
  }
  ...
}
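A possible fix sketch, closing the Source after reading so the underlying file handle is released (Source.fromFile opens a stream that mkString alone never closes). The Pretty trait from the original is omitted here since its definition is not shown.

```scala
import java.io.File
import scala.io.{Codec, Source}

// Sketch: read the file's contents and always close the Source, even if
// mkString throws.
sealed class Input protected (val file: File) {
  lazy val string: String = {
    val source = Source.fromFile(file)(Codec.UTF8)
    try source.mkString
    finally source.close()
  }
}
```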
