joelburget / lvca

language verification, construction, and analysis
Home Page: https://lvca.dev
License: MIT License
Make agnostic to primitives
Identifiers (constructors, variables, structures, type names, …) are written with underscores separating words, not in CamelCase. So write `num_apples`, not `numApples`, and `Foo_bar`, not `FooBar`. E.g. `AbstractSyntax` -> `Abstract_syntax`.

Undecided, because the OCaml standard library uses CamelCase.
I'm working on adding kind checking. One thing I don't think I'll get to right now, but would be nice, is optional kind annotations for sorts in abstract syntax declarations. I'd like to enable them both at the top level and when declaring an argument to a newly-defined sort. Example:
```
primitive : *
list : * -> *
foo (a : *) := foo(list a)
bar b := bar(b primitive)  // note that annotations are still optional; `b : * -> *` is inferred
```
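The checking this enables can be sketched with a tiny model of the kind language, where kinds are built from `*` and arrows. All names here are hypothetical illustrations, not LVCA's actual implementation:

```ocaml
(* A tiny model of the proposed kind language, where kinds are built
   from `*` and arrows. Hypothetical sketch, not LVCA's implementation. *)
type kind =
  | Star                 (* the kind of fully-applied sorts, written `*` *)
  | Arrow of kind * kind (* e.g. `list : * -> *` *)

(* Kind-check one application: `list a` is well-kinded because
   `list : * -> *` and `a : *`, yielding `*`. *)
let apply_kind (f : kind) (arg : kind) : kind option =
  match f with
  | Arrow (dom, result) when dom = arg -> Some result
  | _ -> None (* kind mismatch, or a `*`-kinded sort applied to an argument *)
```

Inferring the omitted annotation in `bar b := bar(b primitive)` then amounts to solving for `b`'s kind from how it's applied, giving `* -> *`.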
We fail to lex strings with escapes, e.g. `"\\"`.

See the OCaml lexer for an example: https://github.com/ocaml/ocaml/blob/trunk/lex/lexer.mll
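For illustration, here's a minimal standalone sketch of the escape handling a string lexer needs; it is not LVCA's lexer (see OCaml's `lexer.mll` for the real thing), just the shape of the fix:

```ocaml
(* Standalone sketch of escape handling when lexing a string literal:
   [lex_string] consumes the characters after the opening quote and
   returns the unescaped contents, or None on a malformed literal. *)
let lex_string (s : string) : string option =
  let buf = Buffer.create (String.length s) in
  let n = String.length s in
  let rec go i =
    if i >= n then None (* unterminated literal *)
    else
      match s.[i] with
      | '"' -> Some (Buffer.contents buf)
      | '\\' when i + 1 < n ->
        (match s.[i + 1] with
         | '\\' -> Buffer.add_char buf '\\'; go (i + 2)
         | '"' -> Buffer.add_char buf '"'; go (i + 2)
         | 'n' -> Buffer.add_char buf '\n'; go (i + 2)
         | _ -> None (* unrecognized escape *))
      | c -> Buffer.add_char buf c; go (i + 1)
  in
  go 0
```

The failing example above, `"\\"`, is the case where the escaped character is itself a backslash.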
I think it's important to have a core language that serves as a good default target.
My initial attempt looks like this:
```ocaml
type core =
  (* First four constructors correspond to regular term constructors *)
  | Operator of string * core_scope list
  | Var of string
  | Sequence of core list
  | Primitive of primitive
  (* Plus, core-specific constructors *)
  | Lambda of sort list * core_scope
  | CoreApp of core * core list
  | Case of core * core_scope list
  (** A metavariable refers to a term captured directly from the
      left-hand side. *)
  | Metavar of string
  (** Meaning is very similar to a metavar in that it refers to a capture
      from the left-hand side. However, a meaning variable is interpreted. *)
  | Meaning of string
```
This issue is here to braindump some thoughts and references.
The first question to answer is what exactly is this core language for? It's to unambiguously define the semantics of a language (via translation to core). It's nice if we can do other things like step it with a debugger, but that's secondary.
Two important concerns, fairly unique to this project, are inclusion of terms from other languages and computational primitives.
By "terms from other languages" I mean that denotational semantics (in general / in LVCA) is about translating from language `A` to `B`. When using core, this is specialized to a translation from `A` to `Core(B)`, where `Core(B)` is core terms with terms from `B` embedded. As an example, a case expression in `Core(bool)`:

```
case(
  true();
  [ true(). false(), false(). true() ]
)
```

Some of the syntax is up for debate, but the point is that this is the equivalent of (OCaml) `match true with true -> false | false -> true`, but where booleans are not built in to core at all; they come from the language embedded in core.
The other concern I mentioned above is computational primitives, by which I mean primitives that are expected to actually do something. For example, you might have a primitive `#not`, in which case you could write something like the above example as `#not(true())`. Here `#not` is not built in to the specification of core; it's provided by the runtime environment. (I'm using a hash to denote primitives, but it's just a convention I think is nice.)

With primitives we're now dealing with "core plus": core extended with a set of primitives. So the example `#not(true())` is a term in `Core(bool()){#not}`. The syntax is completely undecided, but the idea is that this term can be evaluated in any environment that provides the `#not` primitive. I think this is really cool. You could easily find the set of primitives your language relies on. It would even be possible to do a translation to a different set of primitives, e.g. `Core(bool()){#not} -> Core(bool()){#nand}`.
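To make the "evaluated in any environment that provides the primitive" idea concrete, here's a hypothetical sketch where an environment is just a table mapping primitive names to functions; the types and names are illustrative, not LVCA's:

```ocaml
(* Hypothetical sketch: evaluating a "core plus" term against an
   environment supplying its primitives. *)
type term =
  | Operator of string * term list  (* embedded-language term, e.g. true() *)
  | Prim_app of string * term list  (* primitive application, e.g. #not(...) *)

(* A runtime environment is a table from primitive names to functions. *)
type prim_env = (string * (term list -> term)) list

let rec eval (env : prim_env) (t : term) : term =
  match t with
  | Operator (name, args) -> Operator (name, List.map (eval env) args)
  | Prim_app (name, args) ->
    (match List.assoc_opt name env with
     | Some f -> f (List.map (eval env) args)
     | None -> failwith ("unbound primitive: #" ^ name))

(* An environment providing #not, enough to evaluate terms of
   Core(bool()){#not}: *)
let bool_env : prim_env =
  [ ( "not"
    , function
      | [ Operator ("true", []) ] -> Operator ("false", [])
      | [ Operator ("false", []) ] -> Operator ("true", [])
      | _ -> failwith "#not expects a boolean" )
  ]
```

Here `eval bool_env (Prim_app ("not", [ Operator ("true", []) ]))` reduces to `Operator ("false", [])`; a `#nand`-based environment would instead supply a `"nand"` entry.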
Not sure if we're using anything in Core_kernel that's not in Base. See https://discuss.ocaml.org/t/reducing-the-size-of-js-of-ocaml-output/2538
The three classic pretty-printer papers:
There's an adaptation of Wadler's algorithm to OCaml: Strictly Pretty -- Christian Lindig, 2000.
Wadler's algorithm is presented as a set of combinators:

```haskell
(<>)    :: Doc -> Doc -> Doc
nil     :: Doc
text    :: String -> Doc
line    :: Doc
nest    :: Int -> Doc -> Doc
group   :: Doc -> Doc
(<|>)   :: Doc -> Doc -> Doc
flatten :: Doc -> Doc
```
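As a concrete reference point, the strict variant of these combinators (following Lindig's presentation) fits in a few dozen lines of OCaml. This is an illustrative adaptation, not code from this project:

```ocaml
(* An OCaml adaptation of the strict ("Strictly Pretty"-style) variant
   of Wadler's combinators. *)
type doc =
  | Nil
  | Text of string
  | Line               (* a space when flat, a newline when broken *)
  | Cat of doc * doc
  | Nest of int * doc
  | Group of doc       (* try to lay out the contents on one line *)

type mode = Flat | Break

(* Does the work list fit in [w] remaining columns if groups are flattened? *)
let rec fits w = function
  | _ when w < 0 -> false
  | [] -> true
  | (_, _, Nil) :: rest -> fits w rest
  | (i, m, Cat (a, b)) :: rest -> fits w ((i, m, a) :: (i, m, b) :: rest)
  | (i, m, Nest (j, x)) :: rest -> fits w ((i + j, m, x) :: rest)
  | (_, _, Text s) :: rest -> fits (w - String.length s) rest
  | (_, Flat, Line) :: rest -> fits (w - 1) rest
  | (_, Break, Line) :: _ -> true
  | (i, _, Group x) :: rest -> fits w ((i, Flat, x) :: rest)

let pretty width doc =
  let buf = Buffer.create 64 in
  let rec go col = function
    | [] -> ()
    | (_, _, Nil) :: rest -> go col rest
    | (i, m, Cat (a, b)) :: rest -> go col ((i, m, a) :: (i, m, b) :: rest)
    | (i, m, Nest (j, x)) :: rest -> go col ((i + j, m, x) :: rest)
    | (_, _, Text s) :: rest ->
      Buffer.add_string buf s;
      go (col + String.length s) rest
    | (_, Flat, Line) :: rest ->
      Buffer.add_char buf ' ';
      go (col + 1) rest
    | (i, Break, Line) :: rest ->
      Buffer.add_char buf '\n';
      Buffer.add_string buf (String.make i ' ');
      go i rest
    | (i, _, Group x) :: rest ->
      (* a group is flattened only if it fits in the remaining width *)
      let m = if fits (width - col) ((i, Flat, x) :: rest) then Flat else Break in
      go col ((i, m, x) :: rest)
  in
  go 0 [ (0, Break, doc) ];
  Buffer.contents buf
```

For example, `pretty 20 (Group (Cat (Text "hello", Cat (Line, Text "world"))))` keeps everything on one line, while `pretty 10` of the same document breaks at the `Line`.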
However, we want to specify layout as part of the concrete syntax declaration. Probably with boxes. Eg:
```
com :=
  | "skip" { skip() }
  | [<hv 1,3,0> [<h 1> name ":="] iexp] { assign($1; $2) }
  | [<hov 1,0,0>
      [<h 1> "if" bexp]
      [<h 1> "then" com]
      [<h 1> "else" com]
    ] { if($2; $4; $6) }
  ...
```
This example is taken with only minor modifications from Syn: a single language for specifying abstract syntax trees, lexical analysis, parsing and pretty-printing -- Richard J Boulton, 1996.
This is also quite similar to OCaml's Format (they even have almost exactly the same types of boxes).

The Syn declaration seems a little heavyweight. OCaml also has break hints. This seems like roughly the right direction -- I'm still weighing the tradeoffs.
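For comparison, here's the if/then/else layout from the Syn example written against OCaml's actual `Format` API; the `hov` box and `@ ` break hints play the role of the Syn boxes above (the `pp_if` helper is just for this illustration):

```ocaml
(* Render an if/then/else in an hov box: breaks become spaces while
   the line fits the margin, and newlines (indented by 2) once it
   doesn't. *)
let pp_if cond then_ else_ =
  let buf = Buffer.create 64 in
  let fmt = Format.formatter_of_buffer buf in
  Format.pp_set_margin fmt 80;
  Format.fprintf fmt "@[<hov 2>if %s@ then %s@ else %s@]@?" cond then_ else_;
  Buffer.contents buf
```

At margin 80, `pp_if "b" "skip" "skip"` fits on one line, so the break hints render as spaces: `if b then skip else skip`.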
Other:
Update standard parser to capture comments and put them in a new provenance type.
I believe it's useful to slightly generalize lexing from a stateless producer of tokens each implicitly containing a string. I'll start with a couple examples for context on why.
Here's an example I took from the React homepage:
```jsx
class TodoList extends React.Component {
  render() {
    return (
      <ul>
        {this.props.items.map(item => (
          <li key={item.id}>{item.text}</li>
        ))}
      </ul>
    );
  }
}
```
The example shows JavaScript (`class TodoList ...`) which contains HTML(-ish) (`<ul>...</ul>`), which contains JS (`this.props.items...`), which contains HTML (`<li>...</li>`), which contains JS (`item.id`, `item.text`).

JavaScript embeds HTML via tags (`<x>...</x>`) and HTML embeds JavaScript via braces (`{...}`).
Andy Chu, working on Oil Shell, reached similar conclusions:
What do the characters `:-` mean in this code?

```
$ echo "foo:-bar ${foo:-bar} $(( foo > 1 ? 5:- 5 ))"
foo:-bar bar -5
```

Three different things, depending on the context:

- Literal characters to be printed to stdout.
- The "if empty or unset" operator within `${}`.
- The `:` in the C-style ternary operator, then the unary minus operator for negation.
Andy explains this in more detail on the blog. But the crux is this:
OSH uses a simple lexing technique to recognize the shell's many sublanguages in a single pass. I now call it modal lexing.
This is how we address the language composition problem.
Andy uses slightly different terminology, but I think arrives at essentially the same conclusion as me. I also really like his post When Are Lexer Modes Useful?.
Here's an example denotational semantics rule: `[[ x + y ]] = nat-case(x; y; x'. succ([[ x' + y' ]]))`. This rule translates addition to a natural number recursor. On the right-hand side, `[[...]]` signals a transition from the outer (target) language to the inner (source).

Something similar comes up when dealing with typing rules:

```
ctx >> x => nat, y => nat
-------------------------
ctx >> x + y => nat
```

This example is similar in that we're mixing tokens from different languages: `>>`, `=>`, `,`, and `-------------------------` are tokens from the typechecking metalanguage, while `x`, `y`, and `+` are tokens from the object language. I want to have a principled story for how this all works.
Laurence Tratt points out that composition of grammars is hard:
For those using old parsing algorithms such as LR (and LL etc.), there is a more fundamental problem. If one takes two LR-compatible grammars and combines them, the resulting grammar is not guaranteed to be LR-compatible (i.e. an LR parser may not be able to parse using it). Therefore such algorithms are of little use for grammar composition.
He points to Earley as a partial solution to the problem. However, for the purposes I have in mind, it seems a lower-tech solution ought to work. There's usually an obvious sentinel denoting language embedding, something like `{...}` or `"..."` or `[[...]]`. We just need the outer language's lexer to lex it as one big chunk, then hand it to the inner language's lexer to lex the inside. For example:

```
/\w[\w0-9'_-]+/  -> ID
"("              -> LPAREN
")"              -> RPAREN
";"              -> SEMI
"."              -> DOT
/\[\[(.*)\]\]/   -> MEANING($1)
```
Here our example from earlier, `nat-case(x; y; x'. succ([[ x' + y' ]]))`, will produce a sequence of tokens:

```
ID("nat-case")
LPAREN
ID("x")
SEMI
ID("y")
SEMI
ID("x'")
DOT
ID("succ")
LPAREN
MEANING("x' + y'")
RPAREN
RPAREN
```

The meaning token is then further lexed by the inner language's lexer, which will produce tokens:

```
ID("x'")
ADD
ID("y'")
```
This means we never have to confront the general problem of composing parsers, just the much easier problem of composing lexers (which is not a technical problem at all, more a question of how to tie everything together). The only drawback is that the transition between the outer and inner language must be well enough defined for a lexer to recognize it. In any case, it seems like a practice that language designers should really strive for.
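The two-pass scheme can be sketched directly. The outer lexer below implements a simplified version of the token rules from the example (lowercase identifiers, space-only whitespace), treating everything between `[[` and `]]` as one `MEANING` chunk for the inner lexer to handle; it's an illustration, not LVCA code:

```ocaml
(* Sketch of sentinel-based lexer composition: everything between
   [[ and ]] becomes a single MEANING chunk, to be handed to the
   inner language's lexer. *)
type outer_token =
  | ID of string
  | LPAREN
  | RPAREN
  | SEMI
  | DOT
  | MEANING of string

let lex_outer (s : string) : outer_token list =
  let n = String.length s in
  let is_id_char c =
    (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')
    || c = '\'' || c = '-' || c = '_'
  in
  let rec go i acc =
    if i >= n then List.rev acc
    else
      match s.[i] with
      | ' ' -> go (i + 1) acc
      | '(' -> go (i + 1) (LPAREN :: acc)
      | ')' -> go (i + 1) (RPAREN :: acc)
      | ';' -> go (i + 1) (SEMI :: acc)
      | '.' -> go (i + 1) (DOT :: acc)
      | '[' when i + 1 < n && s.[i + 1] = '[' ->
        (* scan for the closing ]] and emit the contents as one chunk *)
        let rec find j =
          if j + 1 >= n then failwith "unterminated [[ ... ]]"
          else if s.[j] = ']' && s.[j + 1] = ']' then j
          else find (j + 1)
        in
        let close = find (i + 2) in
        let chunk = String.trim (String.sub s (i + 2) (close - (i + 2))) in
        go (close + 2) (MEANING chunk :: acc)
      | c when is_id_char c ->
        let j = ref i in
        while !j < n && is_id_char s.[!j] do incr j done;
        go !j (ID (String.sub s i (!j - i)) :: acc)
      | c -> failwith (Printf.sprintf "unexpected character %c" c)
  in
  go 0 []
```

Running this on the example reproduces the token sequence above, with `MEANING "x' + y'"` left for the inner lexer to tokenize.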
There's no need for these, since `Fmt` includes `to_to_string`.
This applies only to the `lvca.abstract_syntax_module` ppx. There, `term := Operator(list term)` generates:

```ocaml
type 'info term =
  | Operator of 'info * 'info List.t
  | Term_var of 'info * string
```

Do we want the `Term_var` constructor? Probably, usually. If the language ever binds a `term`, then yes:

```
term := Operator(list term)
scope := Scope(term. term)
```

But not if there's no `scope`. This can be easily statically determined when all definitions live in the same language, but not if `term` is imported as an external into another language; however, I guess at that point it would be represented as `Nominal` (currently).
```
❯ dune runtest -w
    ocamlopt del/.lvca_del.inline-tests/inline_test_runner_lvca_del.exe
ld: warning: directory not found for option '-L/opt/local/lib'
    ocamlopt bidirectional/.lvca_bidirectional.inline-tests/inline_test_runner_lvca_bidirectional.exe
ld: warning: directory not found for option '-L/opt/local/lib'
    ocamlopt languages/.lvca_languages.inline-tests/inline_test_runner_lvca_languages.exe
ld: warning: directory not found for option '-L/opt/local/lib'
    ocamlopt syntax/.lvca_syntax.inline-tests/inline_test_runner_lvca_syntax.exe
ld: warning: directory not found for option '-L/opt/local/lib'
    ocamlopt ppx_lvca/.ppx_lvca.inline-tests/inline_test_runner_ppx_lvca.exe
ld: warning: directory not found for option '-L/opt/local/lib'
    ocamlopt ppx_lvca_del/.ppx_lvca_del.inline-tests/inline_test_runner_ppx_lvca_del.exe
ld: warning: directory not found for option '-L/opt/local/lib'
    ocamlopt syntax_quoter/.lvca_syntax_quoter.inline-tests/inline_test_runner_lvca_syntax_quoter.exe
ld: warning: directory not found for option '-L/opt/local/lib'
```
Currently, most places will parse a single trailing comment (search for `option' comment`). But this currently fails to parse:

```
term() // comment 1
// comment 2
```

Not good.

Tools like ocamldoc have rules for attaching comments to terms and will only attach one comment.

We should certainly succeed in parsing the above example. I'm not sure yet how we should handle it.
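One plausible direction, sketched with a toy combinator (not LVCA's actual parser library): replace the at-most-one `option' comment` with a zero-or-more `many comment`:

```ocaml
(* Toy combinator sketch of the fix: accept zero or more trailing
   comments ([many comment]) rather than at most one
   ([option' comment]). *)
type 'a parser = string -> int -> ('a * int) option

(* Skip spaces/newlines, then accept one `// ...` comment up to end
   of line. Fails (without consuming) if there's no comment. *)
let comment : string parser =
 fun s pos ->
  let n = String.length s in
  let i = ref pos in
  while !i < n && (s.[!i] = ' ' || s.[!i] = '\n') do incr i done;
  if !i + 1 < n && s.[!i] = '/' && s.[!i + 1] = '/' then (
    let start = !i in
    while !i < n && s.[!i] <> '\n' do incr i done;
    Some (String.sub s start (!i - start), !i))
  else None

(* Zero or more occurrences of [p]: never fails, consumes as much as
   it can. *)
let many (p : 'a parser) : 'a list parser =
 fun s pos ->
  let rec loop acc pos =
    match p s pos with
    | Some (x, pos') -> loop (x :: acc) pos'
    | None -> Some (List.rev acc, pos)
  in
  loop [] pos
```

Applied after `term()` in the failing example, `many comment` collects both `// comment 1` and `// comment 2` instead of stopping after the first.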
Problems:

- `Wrapper`. We should be able to ignore this.
- `val language : Lvca_syntax.Abstract_syntax.t` is always required.
- The `Ty` module:

```ocaml
module Type : sig
  include
    [%lvca.abstract_syntax_module_sig
      {|
sort : *
ty := Sort(sort) | Arrow(ty; ty)
|}
      , { sort = "Sort_model.Sort" }]

  include Nominal.Convertible.Extended_s with type t := Ty.t
```
Both Menhir- and ocamllex-generated files have embedded paths specific to the computer they were generated on. For example:

```
# 1 "/Users/joel/code/lvca-bucklescript/src/Term_Lexer.mll"
```

This is annoying because a bunch of lines change when files are regenerated on my laptop vs. my desktop. And it's obviously more of a problem with contributors. The best thing to do would be to not check in generated files, but that might make building LVCA slightly painful.
Just like Haskell supports empty data declarations with `EmptyDataDecls`; Agda and Coq allow the same. I'm strongly leaning towards allowing this. AFAIK this basically just entails updating the term parser and core parser to allow empty declarations and empty pattern matches.
Currently the concrete syntax grammars accept precedence and fixity hints but don't actually use them.
Create new provenance type that summarizes where a value was evaluated.
Classical parts of a language:
(`Representable` type?) The primitive parser doesn't currently parse `int32`s. We should either introduce a syntax for them (?) or document this.