koka-lang / koka
Koka language compiler and interpreter
Home Page: http://koka-lang.org
License: Other
Here are some possibilities for Koka package management:
Use npm. This has the benefit of being familiar to JavaScript programmers.
However, there are some pretty major downsides:
npm is very slow (but npm5 and yarn help out a lot with this)
Every Koka application has to download and build its dependencies individually. In other words, even if a library is used by two different Koka applications, there will be no sharing. This creates a huge amount of wasted space on the hard drive.
It's very easy to use npm for JavaScript dependencies, but it's extremely clunky (at best) for .NET dependencies.
Koka libraries will need to use peerDependencies for their Koka dependencies, and Koka applications will need to use devDependencies for their Koka dependencies:
{
  "name": "koka-library-1",
  "peerDependencies": {
    "koka-library-2": "^1.0.0"
  }
}
{
  "name": "koka-library-2",
  "peerDependencies": {
    "koka-library-3": "^1.0.0"
  }
}
{
  "name": "koka-application",
  "devDependencies": {
    "koka-library-1": "^1.0.0",
    "koka-library-2": "^1.0.0",
    "koka-library-3": "^1.0.0"
  }
}
Notice that koka-application had to list koka-library-1, koka-library-2, and koka-library-3 as devDependencies, even though it only intended to use koka-library-1.
In other words, all direct and transitive dependencies must be specified in Koka applications.
npm and yarn give warnings if you forget to specify a transitive dependency, so it's not completely horrible, but it's still very inconvenient.
And if a Koka library accidentally uses dependencies or devDependencies rather than peerDependencies, then you can easily end up with compile-time errors, dependency hell, and code duplication.
Use Paket. This is similar to the npm approach (with many of the same downsides), except familiar to .NET programmers rather than JavaScript programmers.
Use Stack / Hackage / etc. This is great for Haskell dependencies, but Koka targets JavaScript and C#, so this doesn't seem like a viable option.
Create a custom package manager for Koka. This solves all of the problems that npm has.
This is actually a lot easier than it sounds: PureScript uses a very simple custom package manager which works very well. I can give more details if you want.
Use Nix. The benefit of Nix is that it is a very powerful package manager, which has many very useful features:
Package installation is atomic, and can be rolled back
Packages are purely functional, which means they are fully deterministic: no more situations of "it works on my computer but fails on somebody else's computer"
Packages are cryptographically hashed to ensure correctness, prevent tampering, and increase security
Packages are functions, and therefore they can be parameterized / customized in various ways, which is not possible with most other package managers (including npm)
Supports automatic build caching for libraries (so that libraries / applications do not need to be compiled over and over and over again): you can easily download a pre-compiled Koka library or application.
Unlike most other caching systems, Nix works correctly because it is purely functional
Has a global cache which allows for sharing the same library between multiple applications (this is completely safe, even if there are malicious users using the same computer)
Supports multiple versions of the same library existing simultaneously
Can use Nix, Haskell, npm, and .NET dependencies all within the same application / library.
You can use all existing npm and .NET libraries even if the library doesn't support Nix: it downloads them straight from the npm / NuGet registries.
Cross-platform: Linux and OSX are already supported.
Windows support is more complicated: Nix does work with Cygwin, and it also works with the Windows Subsystem for Linux.
However, I do not use Windows, so I cannot verify how easy it is to use Nix on Windows.
Basically, Nix has already spent a large amount of time and effort solving the very difficult problem of package management.
I think Nix sounds like a very good option. I am willing to spend some time to investigate whether Nix is viable for Koka or not.
Use another existing package manager, like apt, rpm, Snaps, Flatpak, 0install, etc.
As far as I can tell, the only benefit of this approach over Nix is that certain package managers (like 0install) might have better Windows support compared to Nix.
Some other languages (like go and Rust) have a tool which you can use to automatically format source code according to style guidelines.
In addition to formatting code, it would also be useful if the tool could infer Koka types and insert type annotations into the code.
As an example, let's say you had this Koka code:
fun foo(a) {
a + 5
}
After running the format tool, it would replace the code with this:
fun foo(a: int): int {
a + 5
}
If the inferred type is ambiguous, the tool can ask the programmer to choose from a list of possible types.
This gives the convenience of type inference (not needing to write out the types), but also gives the benefits of explicit type signatures (more robust, better documentation, faster compiling, and easier to read).
I'm curious: are there backends other than Node.js for Koka? Are there plans for alternate backends?
Right now, it's possible to have multiple functions with the same name, as long as their input types differ:
fun foo(a: int): int { a }
fun foo(a: string): string { a }
fun main() {
println(foo(1))
println(foo("1"))
}
However, it's not possible to have functions that differ only in their return types:
fun foo(): int { 1 }
fun foo(): string { "1" }
fun main() {
println(foo())
println(foo())
}
It's also not possible to have values that differ in their types:
val foo: int = 1
val foo: string = "1"
fun main() {
println(foo: int)
println(foo: string)
}
This limitation prevents a lot of useful stuff: empty: a, parse: string -> a, from-list: list<a> -> b<a>, etc.
Consider these two files:
module foo
import bar
type foo {
Foo
}
fun to-int(x: foo): int {
match (x) {
Foo -> 1
}
}
module bar
public type bar {
Foo
}
I get this error:
src/foo.kk(11, 5): error: identifier Foo is ambiguous. Possible candidates:
Foo : foo
bar/Foo: bar/bar
hint : give a type annotation to the function parameters or arguments
This seems wrong: I have specified that x has type foo, so it isn't ambiguous.
PureScript is written in Haskell, and it uses bin-wrapper to distribute pre-compiled Haskell binaries: [link]
This makes it possible to use npm install purescript to easily install PureScript.
A similar technique could be used for Koka.
I suggest the following:
Implement a special function (or effect operator) to obtain the filename / line number / column number of the current function call.
This makes it possible for unit test libraries to report this information, making it easy to determine which unit test failed; it also avoids the need to give names to unit tests, which makes unit tests more convenient.
Implement a special function / effect operator / syntax to selectively run code only on certain platforms (e.g. browser, Node, C#, etc.). This is important because some code / unit tests are designed to run only on certain platforms.
The standard library should include a good unit testing library which uses the above two features.
This means that all Koka projects have access to unit testing without needing to install a third-party library.
This is really important: the more convenient it is to write unit tests, the more likely people are to actually write unit tests.
I think it is important that it is possible to write the unit tests in the same file as the code that they are testing.
An API for the unit tests might look something like this:
// In foo.kk
var foo: int = 1
fun tests(): test () {
assert(foo == 1)
}
// In bar.kk
var bar: int = 2
fun tests(): test () {
assert(bar == 2)
}
// In test.kk
import foo
import bar
fun main(): asyncx () {
run-test(interleaved([
foo/tests,
bar/tests
]))
}
So we have a test effect, which has the assert operator. The assert operator checks whether its argument is true, and if not it fails. When it fails, it logs the filename, line number, and column, which makes it easy to identify which assert failed.
The run-test function runs a unit test, and interleaved is used to run the unit tests in parallel. This can be shortened to run-tests([...]), which does the same thing.
Each module can export a tests function which runs the unit tests for that module. It doesn't have to be called tests; that's just a convention.
Dead code elimination will remove the tests function from production builds, so it doesn't have any performance cost when not running unit tests.
We'll probably also need various mocks for some things, and fuzz testing (e.g. QuickCheck), but QuickCheck might be difficult to implement due to the lack of typeclasses.
Right now, [1, 2, 3] is always a list, but it would be very nice if we could use that exact same syntax for vector and array as well. It would use the types to disambiguate.
Because array is mutable, maybe it doesn't need the [] syntax. But vector definitely needs the syntax, because it's very annoying (and inefficient!) to create literal vectors right now.
It would be especially nice if users could define their own custom types and have the [] syntax work on them, similar to the OverloadedLists extension in Haskell, but that can be implemented later. I'm mostly concerned about vector and array right now.
I never liked Haskell's naming convention for the Maybe type.
I propose to rename the constructors to Some and None:
type maybe<a> {
con None
con Some( value : a )
}
This has some advantages:
Shorter.
They have the same number of characters (4), so it looks better:
match (a) {
Just(value) -> Just(value)
Nothing -> Nothing
}
match (a) {
Some(value) -> Some(value)
None -> None
}
fun foo() {
if (bar)
then Just(1)
else Nothing
}
fun foo() {
if (bar)
then Some(1)
else None
}
Just sounds very strange to me. If read out loud, Just(a) means that it's "just an a", which doesn't make sense. Some sounds more natural to me, since it's a short form of "something".
Rust / OCaml / F# all use Some / None.
Disadvantages:
It looks worse when pattern matching with a single-character variable (in this case b):
match (a) {
Just(b) -> Just(b)
Nothing -> Nothing
}
match (a) {
Some(b) -> Some(b)
None -> None
}
Haskell / PureScript / Elm all use Just / Nothing.
Given the following effect signature:
effect stage<r :: V -> V> {
lift (value: a): r<a>;
}
is it possible to write any handler at all? No amount of type annotations on the following attempt seems to suffice to convince the type checker:
struct id<a>(value: a)
val idHandler: forall<a, e> (() -> <stage<id>|e> a) -> e a = handler {
lift (v) -> resume(id(v))
}
it will always complain with:
error: types do not match
context : resume(id(v))
term : id(v)
inferred type: $b
expected type: _a<$b>
Thanks for any help :)
Currently, the trace function requires its argument to be a string. However, there are many things that you might want to log to the console which can't be (easily) converted to strings.
As such, I recommend changing the type of trace to forall<a> (value: a) -> ().
In addition, I suggest having the trace function automatically add in the filename, line number, and column number where the trace function is called.
So if you use trace([1, 2, 3]) it will log something like this:
trace: [1, 2, 3]
file: path/to/foo.kk, line 10, column 3
This makes it much easier to determine which trace output is which when there are multiple calls to trace.
There should probably be a second version of trace which accepts a message parameter, so if you use trace("some message", [1, 2, 3]) it will log something like this:
trace: some message [1, 2, 3]
file: path/to/foo.kk, line 10, column 3
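On the JS backend, the formatting part of a value-accepting trace could be sketched like this. The function name and the exact formatting are assumptions for illustration, not the actual std_core implementation; the filename/line prefix would come from the call-site information discussed above:

```javascript
// Hypothetical sketch: trace accepts any value, with an optional message,
// and stringifies non-string values before logging.
function formatTrace(message, value) {
  if (value === undefined) { value = message; message = null; }
  const text = typeof value === "string" ? value : JSON.stringify(value);
  return message ? `trace: ${message} ${text}` : `trace: ${text}`;
}

console.log(formatTrace([1, 2, 3]));                 // trace: [1,2,3]
console.log(formatTrace("some message", [1, 2, 3])); // trace: some message [1,2,3]
```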
In my opinion dead code elimination is absolutely crucial.
It means that you can use a large library without worrying about file size, because unused functions are removed.
That means library authors can focus on making a good library with a lot of useful features, rather than worrying about file size (e.g. see the obsession with "micro libraries" in the JS world).
This is especially important in the JS world, because files are served over HTTP, so larger file sizes means longer load times for the site.
Because this is so important, there already exist dead code eliminators for JavaScript, the most popular are Rollup and Google Closure. Webpack has a dead code eliminator as well, but it's currently really bad.
Rollup and Closure are pretty good, but because they are designed to work with JavaScript, and JavaScript has arbitrary side effects anywhere, they have to be conservative. That means that there are many situations where the code is dead, but they fail to remove it.
Koka can do much better, because all top-level variables in Koka are pure. That means that it's trivial to write a dead code eliminator for Koka, and it also means that Koka's dead code eliminator can do a much better job than JS dead code eliminators.
Another benefit is that Koka's dead code eliminator can work with all backends (JS and C#), rather than being specific to JS.
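Because top-level Koka definitions are pure, the core of such an eliminator is just a reachability walk over the reference graph. A minimal sketch, where the graph representation is made up for illustration:

```javascript
// Keep only top-level definitions reachable from the entry points.
function eliminateDeadCode(defs, entries) {
  // defs maps each definition name to the names it references.
  const live = new Set();
  function visit(name) {
    if (live.has(name) || !(name in defs)) return;
    live.add(name);
    for (const ref of defs[name]) visit(ref);
  }
  for (const entry of entries) visit(entry);
  return Object.keys(defs).filter((name) => live.has(name));
}

const defs = { main: ["helper"], helper: [], unused: ["helper"] };
console.log(eliminateDeadCode(defs, ["main"])); // [ 'main', 'helper' ]
```

A JS dead code eliminator cannot do this so simply, because evaluating a top-level JS definition may have side effects; in Koka the purity guarantee makes the reachability walk sound.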
What's the resume function?
Hey Daan! Got inspired to try Koka, but I get the installation error below.
I don't know why cabal isn't used, so I'm a bit puzzled about how to continue from here:
[12:45|~/build/koka]
$ sudo jake
build: koka 0.8.0-dev (debug version)
> ghc -c src/Lib/Printer.hs -fwarn-incomplete-patterns -iout/debug -odir out...
/debug -hidir out/debug
src/Lib/Printer.hs:37:1: error:
Failed to load interface for ‘Data.Text’
Perhaps you meant Data.Set (from containers-0.5.7.1)
Use -v to see a list of the files searched for.
src/Lib/Printer.hs:38:1: error:
Failed to load interface for ‘Data.Text.IO’
Use -v to see a list of the files searched for.
command failed with exit code 1.
[12:45|~/build/koka]
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 8.0.1
[12:45|~/build/koka]
$ git show | head
commit d1eb97825983d3ba1af37eca60c8dcca5b12a982
Author: daanx <[email protected]>
Date: Mon Feb 13 15:06:56 2017 -0800
parse qualified identifiers and operators first
Cheers!
This is super low priority, but eventually it would be great to have WebAssembly support.
WebAssembly is a statically-typed binary format for running code on the web. It can achieve close to native C performance.
WebAssembly is very low-level, at the same level as C: it only supports integers, floats, and top-level function pointers.
All memory must be allocated in a single large array, which behaves like virtual memory. There is no garbage collector, instead all memory must be manually allocated/freed.
There are various planned features including thread support (threads are very unlikely to ever be implemented in JavaScript, so WebAssembly is the only way to access threads).
JavaScript code can import a WebAssembly module by using a regular ES6 import, and WebAssembly can import JavaScript code as well. WebAssembly is designed to integrate well into the JavaScript ecosystem.
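To make the interop concrete, here is a small stand-alone sketch (not Koka compiler output) that instantiates a hand-encoded WebAssembly module exporting an add function from JavaScript:

```javascript
// A hand-encoded WebAssembly module that exports add(a, b) = a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = function 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```

A compiler targeting WebAssembly would emit modules like this mechanically; the point is that the result is a plain JavaScript object whose exports are ordinary callable functions.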
Because it's so close to "the metal", and you have very fine control over everything (including memory management), and there's no overhead from the JS JIT, Koka should be significantly faster if it is compiled to WebAssembly.
C, C++, LLVM, and Rust can already compile to WebAssembly, with many other languages adding in support as well. There's also some useful tools which may help.
The biggest impediment to compiling Koka to WebAssembly is that WebAssembly does not currently support garbage collection, so Koka would have to implement its own custom garbage collector (or use an existing garbage collector written for C/C++).
Eventually garbage collection will be supported in WebAssembly, and then it will be a good time to revisit this issue.
There are two broken situations:
The file path breaks if it's absolute. This command:
koka --compile --library --outdir="build" --target="js" "/absolute/path/to/koka/lib/toc.kk"
Fails with this error:
error: could not find: /absolute/path/to/koka/lib/toc.kk
search path: <empty>
The --include flag breaks if it's absolute. This command:
koka --include="/absolute/path/to/koka/lib" --compile --library --outdir="build" --target="js" "toc.kk"
Fails with this error:
error: could not find: toc.kk
search path: /absolute/path/to/koka/lib
This is causing problems for Nix, because Nix stores everything in /nix/store/, so the Koka compiler can't find any files which are managed by Nix.
I think it would be nice to support partial application for everything.
For lack of a better syntax, I suggest using _ for partial application:
_(1, 2) == fun(x) { x(1, 2) }
foo(_, 1) == fun(x) { foo(x, 1) }
foo(1, _) == fun(x) { foo(1, x) }
foo(_, _) == fun(x, y) { foo(x, y) }
foo._ == fun(x) { foo.x }
_.foo == fun(x) { x.foo }
_._ == fun(x, y) { x.y }
_[1] == fun(x) { x[1] }
foo[_] == fun(x) { foo[x] }
_[_] == fun(x, y) { x[y] }
And similarly for the various other operators:
_ || 1 == fun(x) { x || 1 }
1 || _ == fun(x) { 1 || x }
_ || _ == fun(x, y) { x || y }
_ + 1 == fun(x) { x + 1 }
1 + _ == fun(x) { 1 + x }
_ + _ == fun(x, y) { x + y }
etc.
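The desugaring above is straightforward to express on the JS backend; a sketch of what the compiler might emit for a couple of the forms (the names here are hypothetical):

```javascript
// foo(_, 1) desugars to a function closing over the supplied arguments.
const foo = (x, y) => x - y;
const fooHole = (x) => foo(x, 1); // foo(_, 1)
const add = (x, y) => x + y;      // _ + _
console.log(fooHole(10)); // 9
console.log(add(1, 2));   // 3
```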
This is currently not feasible, and it's also super low priority, but it would be nice if the Koka compiler was eventually written in Koka.
The PureScript compiler (which is also written in Haskell) uses this library to generate source maps.
The types look pretty straightforward, but it might take a bit of effort to keep track of both the Koka position info and the JavaScript position info.
There is also a JavaScript library which makes it quite easy to generate source code + source maps.
() -> a = a
Haskell:
main :: IO ()
Koka:
main : () -> io ()
build: koka 0.8.0-dev (debug version)
> mkdir -p out/debug/Platform
> ghc -c src/Platform/cpp/Platform/cconsole.c -fwarn-incomplete-patterns -io...
> ghc -c src/Platform/cpp/Platform/Config.hs -fwarn-incomplete-patterns -iou...
> ghc -c src/Platform/cpp/Platform/Runtime.hs -fwarn-incomplete-patterns -io...
> ghc -c src/Platform/cpp/Platform/Var.hs -fwarn-incomplete-patterns -iout/d...
> ghc -c src/Platform/cpp/Platform/Console.hs -fwarn-incomplete-patterns -io...
> ghc -c src/Platform/cpp/Platform/ReadLine.hs -fwarn-incomplete-patterns -i...
> ghc -c src/Platform/cpp/Platform/GetOptions.hs -fwarn-incomplete-patterns ...
> ghc -c src/Platform/cpp/Platform/Filetime.hs -fwarn-incomplete-patterns -i...
> ghc -c src/Lib/Printer.hs -fwarn-incomplete-patterns -iout/debug -odir out...
/debug -hidir out/debug
src/Lib/Printer.hs:37:1: error:
Failed to load interface for ‘Data.Text’
Perhaps you meant Data.Set (from containers-0.5.7.1)
Use -v to see a list of the files searched for.
src/Lib/Printer.hs:38:1: error:
Failed to load interface for ‘Data.Text.IO’
Use -v to see a list of the files searched for.
command failed with exit code 1.
Using
▲ koka (master) ghc --version
The Glorious Glasgow Haskell Compilation System, version 8.0.2
on macOS (ghc installed from brew).
My npm ls:
▲ koka (master) npm ls
[email protected] /Users/rauchg/Documents/Projects/koka
├── [email protected]
├─┬ [email protected]
│ └── [email protected]
├─┬ [email protected]
│ ├── [email protected]
│ ├── [email protected]
│ └─┬ [email protected]
│ ├── [email protected]
│ └── [email protected]
├── [email protected]
└─┬ [email protected]
├── [email protected]
└── [email protected]
and jake:
▲ koka (master) jake --version
8.0.15
Should I be attempting to build master, or do you recommend I go with a certain tag or commit?
Reproducible code in the REPL:
> fun foo(): () -> () { (fun() {}) }
^
((2),18): error: function is applied to too many arguments
context : () { (fun() {}) }
term : ()
inferred type: (()) -> ()
It's possible to work around this issue by wrapping the type in parens:
fun foo(): (() -> ()) { (fun() {}) }
When running this command:
koka --include="lib" --compile --library --outdir="build" --target="js" "toc.kk"
The Koka compiler very rapidly consumes 4 GiB of RAM.
When I enter [] into the REPL, I get this error:
(1, 0): error: the type of 'main' must be a function without arguments
expected type: () -> io ()
inferred type: forall<a> () -> list<a>
When I enter [[1]] into the REPL, I get this error:
(1, 0): error: no 'show' function defined for values of type: list<list<int>>
This is somewhat related to #20. It can be solved by having the REPL automatically pretty-print all built-in types.
It would also be nice if it were possible to create pretty printers for custom types, but that will probably require some sort of typeclass mechanism (#16).
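A recursive pretty-printer covering the built-in types is simple to write; a sketch in JavaScript (how the REPL would hook it in is left out, and the function name is hypothetical):

```javascript
// Generic show for the REPL: recurses into lists so nested types like
// list<list<int>> print without a user-defined show function.
function show(value) {
  if (Array.isArray(value)) return "[" + value.map(show).join(", ") + "]";
  if (typeof value === "string") return JSON.stringify(value);
  return String(value);
}

console.log(show([[1], [2, 3]])); // [[1], [2, 3]]
```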
Rust and C# have the ability to annotate a function as "unsafe".
The purpose of this is that unsafe functions can only be called from within unsafe functions, which prevents the unsafeness from contaminating safe functions.
But if the programmer has carefully verified that it's safe to call an unsafe function, they can use an unsafe block to call an unsafe function inside of a safe function.
This prevents programmers from accidentally using unsafe functions, but still gives them the option of explicitly using unsafe functions after manually verifying that it is safe to do so.
We can achieve this in Koka using the effect system:
type unsafe :: X
// TODO this should be fully inlined, it should not have any performance cost
extern inline unsafe-to-safe(action: () -> <unsafe|e> a): e a {
// TODO is the C# correct ?
cs inline "#1()"
js inline "#1()"
}
Now we can create unsafe functions:
fun foo(a: int): unsafe int { a }
Koka will prevent us from using unsafe functions except inside of other unsafe functions. But we can use unsafe-to-safe to call an unsafe function from inside of a safe function:
fun bar(): int {
unsafe-to-safe {
foo(1)
}
}
Of course unsafe-to-safe should not be used lightly: it should only be used after the programmer has proven that it is safe to use (or if the programmer has decided that the unsafety is worth it, e.g. for improved performance).
I propose that the unsafe effect and unsafe-to-safe function be implemented in core, and also that existing unsafe-* functions be changed to use the unsafe effect.
One concern I have is how the unsafe effect interacts with higher-order functions like map:
[1, 2, 3].map(foo)
In this case we are passing the unsafe function foo to map, and this works because map has a polymorphic effect type.
But map is not unsafe, and we did not use unsafe-to-safe, so this seems to violate the expectations for unsafe. Maybe this is okay, I'm not sure.
Hey, I tried to build Koka by running jake on Windows, and I get the error below:
check for packages: random text parsec
build: koka 0.8.0-dev (debug version)
build ok.
> out\debug\koka-0.8.0-dev.exe --outdir=out\lib -ilib -itest/algeff -itest/lib
invalid argument
jake aborted.
Error: Process exited with error.
at api.fail (C:\Users\hchs8\AppData\Roaming\npm\node_modules\jake\lib\api.js:336:18)
at Exec.<anonymous> (C:\Users\hchs8\AppData\Roaming\npm\node_modules\jake\lib\utils\index.js:124:9)
(See full trace by running task with --trace)
I tried to run koka-0.8.0-dev.exe directly, but I get "Invalid argument" no matter what I input.
Right now, Ctrl+D displays the message end of file and does not quit the REPL. It is common practice for REPLs to quit when using Ctrl+D (at least on Linux).
However, it should only quit when the text prompt is empty. In other words:
> (Ctrl+D pressed)
In the above scenario, it should exit. However:
> some code or whatever (Ctrl+D pressed)
In the above scenario, it should not exit. Instead it should treat it as meaning "I am finished typing this expression, please parse/compile/execute it now".
With a static type system, there are some situations where the error message isn't good enough to help you find where the bug is. In those situations, you feel extremely helpless and frustrated, because there's no way to debug your program to find the bug!
We can instead treat compile-time errors as warnings and output JS files even if there is a type error. This allows you to use standard runtime debugging techniques (trace, breakpoints, etc.) to find the bug.
This feature is important enough that GHC decided to implement it (with the corresponding paper here, in particular see section 8). The very cool thing about this feature is that it's safe: when a compile-time error happens, it will insert a runtime error into the code. In other words, compile-time errors are converted into runtime errors.
As an example, consider this Koka program:
fun foo(): () {
trace("1" + 0)
}
That gives this compile-time error:
src/foo.kk(2,15): error: types do not match
context : "1" + 0
term : 0
inferred type: int
expected type: string
Instead, it should be a compile-time warning, and it continues with compilation.
After compilation finishes, it will generate this JS code:
function foo() /* () -> () */ {
return $std_core.trace((("1") + ($std_core.throw_1("src/foo.kk(2,15): error: types do not match\n context : \"1\" + 0\n term : 0\n inferred type: int\n expected type: string"))));
}
Notice that it throws a runtime error (at the same place where term is), and that the runtime error is exactly the same as the compile-time warning.
So we get compile-time warnings, and runtime errors.
The warnings are good because they give the programmer confidence that their code is correct: after all the warnings are fixed, the program is type-safe.
The runtime errors are good because they preserve type safety: even though compile-time errors are now warnings, the program will still behave sanely at runtime: you won't get weird undefined behavior.
A program which gives a compile-time warning will still throw a runtime error even if the runtime types are correct: this isn't dynamic typing. So there's no runtime cost for type checks. And you can't be lazy and ignore the compile-time warnings forever, because your program will still break at runtime.
This gives the benefits of static typing while still allowing for debugging the program at runtime. This is useful for experts, but it's especially useful for beginners who are often overwhelmed by a static type system.
In addition to implementing this system, I propose that it be the default setting for Koka. This helps Koka to feel a lot friendlier and more dynamic, which will help to attract people from dynamically-typed languages (as an example, TypeScript gives warnings and outputs JS code even when the static types are wrong, this is very useful in practice).
But unlike dynamic languages, Koka's type system is strict (no type coercions or weird undefined behavior), and we still get compile-time warnings to help refactor and fix bugs, so we get the best of both worlds.
One of my concerns is that this system might produce too many warnings, because the compiler doesn't halt on the first error, instead it prints all of the errors. I'm not sure what the best solution is, but maybe it isn't a problem in practice.
Another concern is that this might encourage programmers to be lazy and not fix bugs. This is especially true if Koka ends up being popular with JavaScript programmers rather than Haskell programmers, because most JavaScript programmers use the "worse is better" approach and don't care much about correctness.
This is a minor concern for me, but I don't think it's a huge deal, because Koka's type system and overall design is very clean and strict, and you still get runtime errors, so it still forces programmers to (eventually) fix bugs.
I think it would be nice if we could have compile-time errors for libraries, and compile-time warnings for applications. That prevents Koka library authors from being lazy, while still allowing for ease of development.
When in Koka's interpreter, typing anything in gives a "could not find requirejs" warning/error (twice), after which the normal result is printed, e.g.
> 2
could not find requirejs
could not find requirejs
2
>
On the other hand, typing in shell:
$ node out/lib/interactive.js
2
$ nodejs out/lib/interactive.js
2
$
So it doesn't complain about not being able to find requirejs.
On my PC, node is an alias for nodejs, and the nodejs version is v4.2.6. Also, my $NODE_PATH is /usr/local/lib/node_modules:/usr/local/lib/node_modules/requirejs.
Setting NODE_PATH to Koka's dir doesn't help.
How do I get the warning to go away?
It would be useful to be able to mark a function as inline, like this:
inline fun foo() {
1 + 2
}
Now every time foo() is used, the compiler will replace it with 1 + 2.
This avoids the overhead of the function call, which provides a small speed increase for small functions.
It also enables many other optimizations (including stream fusion), and therefore it can significantly increase the speed of the program.
If the function is always called (and not used as a value), then the dead code eliminator can remove the function definition, which can reduce code size.
Functions with arguments can be inlined as nested match expressions, as follows:
inline fun foo(a) { a + a }
inline fun bar(a = 1) { a + a }
inline fun qux(a, b, c) { a + b + c }
foo(1)       // match (1) { a -> { a + a } }
bar()        // match (1) { a -> { a + a } }
bar(2)       // match (2) { a -> { a + a } }
qux(1, 2, 3) // match (1) { a -> match (2) { b -> match (3) { c -> { a + b + c } } } }
Some care needs to be taken to ensure that local variables do not shadow outer variables:
val foo = 1
inline fun bar() { foo + foo }
fun qux() {
val foo = 2
bar() // This needs to refer to the outer variable foo, not the local variable foo
}
An easy way to achieve this is to give every local variable a unique identifier which is guaranteed to not conflict with top-level identifiers.
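Concretely, in the JS output the renaming might look like this (foo_1 is a hypothetical fresh name chosen by the compiler):

```javascript
// The local `foo` in qux is renamed to foo_1, so the inlined body of
// bar() still refers to the top-level foo.
const foo = 1;
function qux() {
  const foo_1 = 2;  // was: val foo = 2
  return foo + foo; // inlined body of bar(), unaffected by foo_1
}
console.log(qux()); // 2
```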
If this feature is implemented, the compiler can use the generic inlining mechanism to solve issue #18, therefore this issue is a superset of issue #18
The Koka papers mention iteration, but the standard library doesn't seem to define any iterators.
I propose that iteration be defined in the standard library, and all of the collection types should define iterators.
I also propose that a robust set of iterator combinators be defined, e.g. map, filter, take, drop, etc. These combinators should work on all iterators.
In addition, it should be easy to convert from one collection type to another, by converting from type Foo to an iterator, then from the iterator to type Bar:
// Convert from list to vector
vector( iter( [] ) )
// Convert from vector to list
list( iter( vector() ) )
// Convert from list to dict
dict( iter( [ ("foo", 1), ("bar", 2) ] ) )
This should replace the various ad-hoc conversion functions which currently exist.
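JavaScript generators give a feel for the proposed combinator set; the names mirror the proposal, but the implementations below are just sketches:

```javascript
// Generic iterator combinators: each works on any iterable.
function* map(iter, f) { for (const x of iter) yield f(x); }
function* filter(iter, p) { for (const x of iter) if (p(x)) yield x; }
function* take(iter, n) { for (const x of iter) { if (n-- <= 0) return; yield x; } }

// Conversions between collections go through the iterator:
const toVector = (iter) => Array.from(iter);
console.log(toVector(take(filter(map([1, 2, 3, 4], (x) => x * 2), (x) => x > 2), 2))); // [ 4, 6 ]
```

Because every combinator consumes and produces the same iterator interface, a single implementation works for lists, vectors, dicts, and any user-defined collection.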
Java, TypeScript, F#, Rust, Clojure, and OCaml all support some way of attaching metadata to things.
This metadata can serve many purposes:
Attaching documentation to a variable
Specifying that a variable should only be accessible for certain compilation targets (e.g. JS or C#)
Marking a variable as deprecated (and specifying a deprecation message which is printed if the variable is used)
Marking a function as inline
Marking a function as being a "test" function: this function will be automatically run when running unit tests, and it won't be run in production
Specify that a variable should be exported to the compilation target (e.g. JS or C#) and therefore the compiler shouldn't mangle its name
Specify that the variable's name in the compilation target should be different from the variable's name in Koka
Specify the compilation strategy for a particular datatype (e.g. you could specify that an ADT should be compiled into JavaScript classes)
Specify how a datatype should be marshalled to/from the compilation target
Rather than creating new syntax for each of these situations, it's useful to create a single generic "annotation" syntax which can cover all of those use cases (and more).
It's even possible to allow for user-created annotations, allowing the user to specify whatever metadata they want.
The syntax for Koka might look something like this:
@[inline, test, deprecated("use bar instead"), target(js)]
fun foo() {}
The above specifies the inline, test, deprecated, and target attributes for the foo function.
You could also specify them separately:
@[inline]
@[test]
@[deprecated("use bar instead")]
@[target(js)]
fun foo() {}
Of course, the above syntax is just a suggestion; there are many possibilities.
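To make the idea concrete, here is a purely hypothetical sketch of how a compiler could represent parsed annotations as plain data and act on them; none of these names come from Koka itself.

```typescript
// Hypothetical compiler-side representation of parsed annotations.
type Annotation =
  | { kind: "inline" }
  | { kind: "test" }
  | { kind: "deprecated"; message: string }
  | { kind: "target"; backend: "js" | "cs" };

interface Decl {
  name: string;
  annotations: Annotation[];
}

// Example use: emit a warning when a deprecated binding is referenced.
function deprecationWarning(decl: Decl): string | null {
  for (const a of decl.annotations) {
    if (a.kind === "deprecated") {
      return `${decl.name} is deprecated: ${a.message}`;
    }
  }
  return null;
}

const fooDecl: Decl = {
  name: "foo",
  annotations: [{ kind: "inline" }, { kind: "deprecated", message: "use bar instead" }],
};
```

Because annotations are just data attached to declarations, user-defined annotations fall out for free: the compiler ignores kinds it doesn't recognize, and tooling can query them.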
Consider this Koka code:
module foo
type foo {
Foo(bar: int, qux: int)
}
fun main(): console () {
val a = Foo(1, 2)
val b = a(3)
println(b.bar)
println(b.qux)
println(match (b) {
Foo(bar) -> bar
})
}
This compiles into this JS code:
function Foo(bar, qux) /* (bar : int, qux : int) -> foo */ {
return { bar: bar, qux: qux };
}
function bar(foo) /* (foo : foo) -> int */ {
return foo.bar;
}
function qux(foo) /* (foo : foo) -> int */ {
return foo.qux;
}
function _copy(_this, bar0, qux0) /* (foo, bar : ?int, qux : ?int) -> foo */ {
var _bar_22 = (bar0 !== undefined) ? bar0 : bar(_this);
var _qux_28 = (qux0 !== undefined) ? qux0 : qux(_this);
return Foo(_bar_22, _qux_28);
}
function main() /* () -> console () */ {
var a = Foo(1, 2);
var b = _copy(a, 3);
$std_core.println_1(bar(b));
$std_core.println_1(qux(b));
var _x0 = b.bar;
return $std_core.println_1(_x0);
}
The Foo, bar, qux, and _copy functions should all be inlined:
function main() /* () -> console () */ {
var a = { bar: 1, qux: 2 };
var b = { bar: 3, qux: a.qux };
$std_core.println_1(b.bar);
$std_core.println_1(b.qux);
var _x0 = b.bar;
return $std_core.println_1(_x0);
}
This should significantly improve the performance and file size (especially with the _copy function).
Hello all,
I was playing with Koka and I was surprised by the following behaviours.
Consider the program below.
public module playground/pgm
effect free { ask () : a }
fun mtry(c) {
handle(c) {
ask() -> resume("abc")
}
}
fun h () : free int { ask() }
As expected, the type of h is:
> :l playground/pgm.kk
> :t h
() -> free int
Now, if we define x as follows:
> val x = mtry{h()}
int
Its type is int, but its value is "abc":
> x
abc
I'm not familiar with algebraic effects or with the Koka language, but I think the previous example breaks the subject reduction property.
We can also use the following example to get a run-time type error.
> val y = mtry{h()} + 1
y : int
> y
Error: Invalid integer: abc ...
I'm not sure, but my guess is that the definition of mtry should not be accepted, since the type a escapes its scope. OCaml applies a similar restriction when one tries to use the following GADT declaration.
# type t = Ask : 'a -> t;;
type t = Ask : 'a -> t
# let f = function Ask x -> x;;
Error: ...
PS: we get the same results with the dev branch.
Right now Koka uses Require.js, which is quite outdated. Modern JS uses Webpack or Rollup, which bundle multiple files into a single optimized file.
Both Webpack and Rollup support ES6 modules, and it's recommended to use ES6 modules because it allows Webpack/Rollup to remove unused variables.
Both Webpack and Rollup support a plugin system, which we can use to add in support for Koka. However, I think Webpack's plugin system is more mature and powerful.
Both Webpack and Rollup support a "watch mode" which means that whenever a file changes, it will automatically recompile your project, but it will only recompile the files that changed, so it's much faster than a full compile. However, I've found Rollup's watch mode to be a bit flaky: Webpack's watch mode is rock solid.
Rollup produces extremely small code, because it merges all files into a single function scope, whereas Webpack creates one function per file. However, Webpack 3 will also merge multiple files into a single function scope. This puts Webpack on par with Rollup.
Webpack has many many more features than Rollup, but Rollup is more lightweight.
Webpack is much more actively maintained than Rollup: the author of Rollup is very sporadic with updates and improvements.
As such, I recommend using Webpack as the official Koka bundler. In order to support watch mode, the Koka compiler will need the ability to recompile a single file which has changed, without recompiling files which haven't changed.
There are various strategies for this, such as placing the compiler artifacts (.js and .kki) into a temporary folder and reusing existing artifacts which haven't changed. PureScript uses that strategy.
Alternatively, Koka could run as a daemon in the background, so that way it can store the artifacts in memory. Fable uses that strategy.
Using a database to store the artifacts is another possibility, which would allow for transactions. This can result in more correct builds, because the file system is not transactional.
Here's the reduced test case:
module foo
import std/async
val uncancelable = handler {
await(setup, n) -> resume(await(setup, n))
cancel() -> resume(False)
exit(mbexn) -> exit(mbexn)
outer-async() -> resume(outer-async())
}
I get the following error:
src/foo.kk(6, 3): error: types do not match
context : await(setup, n)
term : await
inferred type: (setup : (cb : (std/async/await-result<_b>) -> io ()) -> io (), wid : std/async/wid) -> std/async/async std/async/await-result<_b>
expected type: ((cb : (std/async/await-result<$a>) -> io ()) -> io (), std/async/wid) -> std/async/async std/async/await-result<_b>
because : operator type does not match the parameter types
The types are the same, except the inferred type has the type variable _b and the expected type has the type variable $a.
Rather than creating a separate plugin for each editor, we can probably use the Language Server Protocol. That way we only need to do the work once and every editor will be supported.
The first language I learned was JavaScript, and I've been using it for over 10 years.
But I've grown sick of it, so I've tried many different compile-to-JavaScript languages, including Haxe, Roy, ClojureScript, TypeScript, Elm, F#, and PureScript.
They are all good languages, but none of them met all of the requirements I was looking for in a language:
Purely functional
Heavily encourages functional style
Has ADTs and pattern matching
Statically typed with a sound type system
Good syntax
Good module system
Generates extremely small, fast, and efficient JavaScript code
Really good FFI to JavaScript
The functional languages I tried all fail at the last two points (often because of currying).
Koka is different, though. It fulfills all my requirements (and then some). It's a fantastically well designed language: very minimal, clean, concise, safe, and fast, with an excellent FFI and a lot of convenient features. As a Lisp programmer, I also really like being able to use - and ? in identifiers.
I'm especially impressed by the excellent error messages, which is something even production languages fail at. And the standard library is extremely high quality as well. I definitely wasn't expecting that from a research language.
Therefore, I plan to rewrite a ~10,000 line JavaScript program in Koka.
I'm also willing to help out to improve Koka. What's the current status of Koka? It seems to still be regularly updated, which is good, but are there any long-term plans?
Is there anything I can help out with, for example improving the documentation?
This code currently does not work:
fun foo((a, b)) {
a + b
}
val (c, d) = (1, 2)
It would be nice to be able to pattern match on function arguments, and also with val. This would have the same behavior as match, including throwing exceptions when the match is not exhaustive.
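For comparison, the JS backend's target language already supports both forms of destructuring, though without the exhaustiveness exception the proposal asks for:

```typescript
// TypeScript analogue of the requested feature: a pattern in parameter
// position, and a pattern in a local binding.
function foo([a, b]: [number, number]): number {
  return a + b;
}

const [c, d] = [1, 2];
// foo([1, 2]) === 3, c === 1, d === 2
```

Note the difference in semantics: TypeScript destructuring silently yields undefined on a shape mismatch, whereas the proposal asks for the same exception-throwing behavior as a non-exhaustive match.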
Thank you for this amazing language! I am intrigued about interactions
between algebraic effect handlers and coinductive types, which is interesting in the context of reactive programming.
The effect inference seems to reject simple functions on coinductive streams, e.g.,
cotype stream<a> {
SCons(hd: a, tl: stream<a>)
}
fun hist_helper(s: stream<a>, state: list<a>): stream<list<a>> {
val next = Cons(s.hd, state)
SCons(next, hist_helper(s.tl, next))
}
fun stake(s: stream<a>, n: int): list<a> {
if (n <= 0) return Nil
else return Cons(s.hd, stake(s.tl, n - 1))
}
Both functions are rejected:
error: effects do not match
...
inferred effect: total
expected effect: <div|_e>
because : effect cannot be subsumed
From my understanding of coinductive types, I would expect that the inferred total effect is correct in both cases. Why is divergence expected? Can we massage the definitions so that they are accepted as total functions? Thanks for any help.
Consider this Koka module foo.kk:
module foo
extern include {
js file "foo-inline.js"
}
public extern foo(fn: () -> e a): e a {
js "foo_inline"
}
fun main() {
foo { 1 }
}
And this JavaScript file foo-inline.js:
function foo_inline(f) {
return f();
}
When compiling, I get this output:
function foo_inline(f) {
return f();
}
function _cps_foo(fn, _k) /* forall<a,e> (fn : () -> e a) -> e a */ {
return foo_inline(fn);
}
function _fast_foo(fn) /* forall<a,e> (fn : () -> e a) -> e a */ {
return foo_inline(fn);
}
function foo(fn, _k) /* forall<a,e> (fn : () -> e a) -> e a */ {
return ((_k !== undefined)) ? _cps_foo(fn, _k) : _fast_foo(fn);
}
function main() /* () -> int */ {
return foo(function() {
return 1;
});
}
Depending on whether the effect is CPS or not, it will call either _fast_foo or _cps_foo.
However, there is a problem: _cps_foo calls foo_inline, but it doesn't pass in the _k CPS argument. This means that foo_inline cannot work for CPS effects.
This can be solved by changing _cps_foo to pass in the _k parameter. Alternatively, it could be solved by adding a new mechanism to specify different behavior for CPS vs non-CPS:
public extern foo(fn: () -> e a): e a {
js "foo_inline"
js cps "foo_inline_cps"
}
This would also work with inline:
public extern foo(fn: () -> e a): e a {
js inline "foo_inline(#1)"
js inline cps "foo_inline_cps(#1, #2)"
}
Personally, I prefer this solution.
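A hypothetical foo_inline_cps might look like the following. The exact CPS calling convention is an assumption based on the generated code above (a trailing continuation parameter _k), so treat this as a sketch rather than what the compiler actually requires.

```typescript
// The existing non-CPS extern: just calls the function.
function foo_inline(f: () => number): number {
  return f();
}

// Hypothetical CPS variant: threads the continuation k through explicitly,
// so effectful (CPS-compiled) callees can capture and resume it.
function foo_inline_cps(
  f: (k: (x: number) => number) => number,
  k: (x: number) => number
): number {
  return f(k);
}
```

Usage: a non-CPS call site would invoke foo_inline(() => 1), while a CPS call site would invoke foo_inline_cps(k => k(1), x => x + 1), with the continuation receiving the result.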
I tried running some of the code from the Koka paper:
effect yield<a> {
yield( item: a ) : ()
}
fun foreach(f : a -> e bool, act : () -> <yield<a>|e> ()): e () {
handle (act) {
return x -> ()
yield(x) -> if (f(x)) then resume(()) else ()
}
}
However, I get this error:
error: types do not match (due to an infinite type)
context : handle (act) {
...
}
term : act
inferred type: () -> <cps,yield<_a>|_e> ()
expected type: () -> <yield<_a>|_e> _b
hint : annotate the function definition?
Adding cps to the effect does not help at all.
Reproducible code in the REPL:
> fun foo() { fun() {} }
^
((1),13): error: invalid syntax
unexpected keyword fun.anon
expecting ";", "fun", "function", "val", "var", expression, "return" or "}"
It's possible to work around this issue by wrapping the lambda in parens:
> fun foo() { (fun() {}) }
I think all binary numeric types should have the form ${format}${bits}, e.g. int32, uint32, int64, float32, float64, float128, etc.
The ability to have multiple functions with the same name and different types is very nice, but there are still situations where ad-hoc polymorphism is useful.
An example is a function which calls map: it would be nice if it worked with any type which implements map.
There are many ways of implementing ad-hoc polymorphism: Haskell typeclasses, ML modules, and creating multiple copies of each function (used by Rust and others).
I propose a simplified typeclass mechanism for Koka.
Any variable can be marked as implicit:
implicit val foo: int = 1
implicit fun bar(): int { 2 }
Function parameters can also be marked as implicit:
fun qux(a: implicit int): int { a }
When a function with implicit arguments is called, the compiler searches for implicit variables which are in scope, and if there is an implicit variable which matches the type, it is automatically passed in:
// This will be compiled into qux(foo)
qux()
It's also possible to pass implicit arguments explicitly:
qux(implicit foo)
It's also possible to convert from implicit variables to regular variables, and from regular variables to implicit variables:
implicit val foo: int = 1
val bar: int = foo
implicit val qux: int = bar
And that's it. That's the entire system. With this, it's possible to create Haskell typeclasses by combining type with implicit:
type map<t> {
Map(map: forall<a, b>(t<a>, (a) -> b) -> t<b>)
}
fun map(i: implicit map<t>, l: t<a>, f: (a) -> b): t<b> { (i.map)(l, f) }
implicit val map-list: map<list> = Map(
map = fun(a, f) { ... }
)
// This is the same as map(implicit map-list, [1, 2, 3], fun(x) { x + 1 })
map([1, 2, 3], fun(x) { x + 1 })
The above code is equivalent to this Haskell program:
class Map t where
map :: t a -> (a -> b) -> t b
instance Map [] where
map a f = ...
map [1, 2, 3] \x -> x + 1
The Koka code is syntactically quite heavy-weight. This can potentially be solved with the following trick:
rectype map<t> {
Map(map: forall<a, b>(implicit map<t>, t<a>, (a) -> b) -> t<b>)
}
implicit val map-list: map<list> = Map(
map = fun(_, a, f) { ... }
)
map([1, 2, 3], fun(x) { x + 1 })
The benefit of this light-weight typeclass system is that it is easy to implement and easy for the user to understand. It's the same as regular function arguments... except automatically passed in by the compiler.
It also gives a lot of the same power as ML modules, because it's possible to pass implicit arguments explicitly, create implicit variables dynamically, and use locally scoped implicit variables.
One downside is that it's not possible to define union for maps or sets, because union assumes that there is only a single typeclass instance per type. Instead, you need to store the Ord instance in the data structure itself and use unionLeft or unionRight (which might be slower than union). This is possible because implicits can be converted into regular values and then stored in data structures.
Another downside is that I'm not sure how to define advanced features like functional dependencies or type families. But maybe the simplicity of implicit is more important than advanced type features.
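To make the elaboration concrete, here is a dictionary-passing sketch in TypeScript. It is specialized to arrays, since TypeScript lacks the higher-kinded types the Koka version uses, and all names here are illustrative rather than anything Koka defines.

```typescript
// The typeclass becomes a record of functions (a "dictionary").
interface MapDict<T> {
  map: (t: T[], f: (a: T) => T) => T[];
}

// The "instance" is just a value of the dictionary type.
const mapList: MapDict<number> = {
  map: (t, f) => t.map(f),
};

// The user writes `map([1, 2, 3], f)`; the compiler elaborates the
// implicit argument into an explicit first parameter, inserting mapList.
function map(
  dict: MapDict<number>,
  l: number[],
  f: (a: number) => number
): number[] {
  return dict.map(l, f);
}
```

Since instances are ordinary first-class values, passing a different dictionary explicitly, or storing one inside a data structure, needs no extra machinery, which is exactly the flexibility the proposal claims over Haskell-style global instances.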
Is it possible to write structs (or any types for that matter) with mutable fields in Koka? How would one do that?
I understand I can use the state monad/effect for whole variables, but could I simulate mutating a field through a reference to the whole struct as one would do in an imperative language?
This code:
match (1) { a -> a }
Fails with this error:
Backend.JavaScript.FromCore.genMatch: no branch in match statement: [_x0]
This innocent-looking program:
effect gen<a> {
yield(item: a): ()
}
fun list<a,e>(program : () -> <gen<a>, e> ()): e list<a> {
handle(program) {
yield(x) -> Cons(x, resume(()))
return _ -> Nil
}
}
yields:
*** internal compiler error: Type.Unify.labelName: label is not a constant
There is a proposal for adding BigInts to JavaScript. After the proposal is standardized, we should add some bindings to it.
In addition to supporting unlimited size integers, the proposal also gives some functions for creating fixed size integers (e.g. Int64 / Uint64). We should add in support for those as well.
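The proposal's fixed-size helpers are BigInt.asIntN and BigInt.asUintN, which wrap a value into an n-bit two's-complement signed or unsigned range; any int64/uint64 bindings would presumably sit on top of them. A small illustration:

```typescript
// Arbitrary-precision arithmetic: exact even beyond Number's safe range.
const big = BigInt(2) ** BigInt(64);

// Fixed-size views: wrap to 64-bit unsigned / signed ranges.
const wrapped = BigInt.asUintN(64, big);              // 2^64 wraps to 0n
const signed = BigInt.asIntN(64, BigInt(2) ** BigInt(63)); // wraps to -(2^63)
```

Since BigInt values are a separate primitive type from Number (mixing them in arithmetic throws a TypeError), a Koka binding could expose them as a distinct type without any runtime tagging.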