redprl / algaett

🦠 An experimental elaborator for dependent type theory using effects and handlers

License: Apache License 2.0

OCaml 99.52% Shell 0.18% Makefile 0.30%
algebraic-effects normalization-by-evaluation ocaml ocaml-program type-theory proof-assistant

algaett's Introduction

🦠 algaett’s not algaeff

This development is an experiment with the following goals:

  1. Adopt smalltt and related techniques into the cubical world.
  2. Show how various OCaml packages of ours fit together.
  3. Write natural grammars without necessarily conforming to LR(k).
  4. Use lots of Unicode emojis.

Try It Out!

opam pin git+https://github.com/RedPRL/bantorra
opam pin git+https://github.com/RedPRL/algaett
cat tests/example.ag
algaett tests/example.ag

The last command should produce no output, which means the file type-checks!

Important Ideas

Ideas from Smalltt

The core NbE algorithm closely follows András Kovács's smalltt. Here are some notable differences:

  1. We intentionally do not implement unification.
  2. The universe itself (as a term) is not inferable, which means that the checking might have to be redone with the type unfolded.
    πŸ“Œ πŸ˜„ : 🌌 πŸ†™ 2️⃣ πŸ‘‰ 🌌 πŸ†™ 1️⃣
    πŸ“Œ _ πŸ‘‰ 🌌 : πŸ˜„
    
    Type inference for the universe 🌌 will fail, so type checking is redone with 😄 unfolded to 🌌 🆙 1️⃣.
  3. The conversion checker is generalized to handle subtyping generated by cumulativity.
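Point 3 above can be illustrated with a minimal sketch (not algaett's actual code, and with illustrative variance rules): cumulativity makes Univ i a subtype of Univ j whenever i ≤ j, and the conversion checker is generalized from equality to this subtyping relation.

```ocaml
(* A toy type language: universes at a level, and Pi types. *)
type tp = Univ of int | Pi of tp * tp

(* Subtyping generated by cumulativity: [Univ i] <= [Univ j] when i <= j.
   Here Pi is taken contravariant in the domain and covariant in the
   codomain; the real checker's variance choices may differ. *)
let rec subtype (a : tp) (b : tp) : bool =
  match a, b with
  | Univ i, Univ j -> i <= j
  | Pi (dom1, cod1), Pi (dom2, cod2) ->
    subtype dom2 dom1 && subtype cod1 cod2
  | _ -> false

let () = assert (subtype (Univ 1) (Univ 2))
let () = assert (not (subtype (Pi (Univ 1, Univ 1)) (Pi (Univ 2, Univ 1))))
```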

Modular Development

  • algaeff: reusable effects-based components
  • asai: error messages (not actively used yet)
  • bantorra: unit resolution (not actively used yet)
  • bwd: backward lists
  • mugen: universe levels
  • yuujinchou: namespaces and name modifiers

Parser beyond LR

We are using Earley's parsing algorithm, which can handle all context-free grammars.
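To show what "beyond LR" buys, here is a self-contained sketch of an Earley recognizer (not the library algaett uses) applied to palindromes over {a, b}, a classic context-free language that no LR(k) parser can handle:

```ocaml
type sym = T of char | N of string
type rule = string * sym list
type item = { lhs : string; rhs : sym list; dot : int; start : int }

let recognize (grammar : rule list) (start_sym : string) (input : string) : bool =
  let n = String.length input in
  let chart = Array.make (n + 1) [] in
  (* [add i it] inserts [it] into chart cell [i]; returns true if it is new. *)
  let add i it =
    if List.mem it chart.(i) then false
    else (chart.(i) <- it :: chart.(i); true)
  in
  List.iter
    (fun (l, r) ->
       if l = start_sym then ignore (add 0 { lhs = l; rhs = r; dot = 0; start = 0 }))
    grammar;
  for i = 0 to n do
    (* Iterate to a fixpoint so completions of nullable rules are not missed. *)
    let changed = ref true in
    while !changed do
      changed := false;
      List.iter
        (fun it ->
           match List.nth_opt it.rhs it.dot with
           | Some (N nt) -> (* predict *)
             List.iter
               (fun (l, r) ->
                  if l = nt && add i { lhs = l; rhs = r; dot = 0; start = i }
                  then changed := true)
               grammar
           | Some (T c) -> (* scan *)
             if i < n && input.[i] = c then
               ignore (add (i + 1) { it with dot = it.dot + 1 })
           | None -> (* complete *)
             List.iter
               (fun waiting ->
                  if List.nth_opt waiting.rhs waiting.dot = Some (N it.lhs)
                     && add i { waiting with dot = waiting.dot + 1 }
                  then changed := true)
               chart.(it.start))
        chart.(i)
    done
  done;
  List.exists
    (fun it -> it.lhs = start_sym && it.start = 0 && it.dot = List.length it.rhs)
    chart.(n)

(* S -> a S a | b S b | a | b | empty: palindromes, not LR(k). *)
let pal =
  [ "S", [T 'a'; N "S"; T 'a']; "S", [T 'b'; N "S"; T 'b'];
    "S", [T 'a']; "S", [T 'b']; "S", [] ]

let () = assert (recognize pal "S" "abba")
let () = assert (not (recognize pal "S" "ab"))
```

This naive list-based chart is quadratic-ish per cell and only for exposition; production Earley implementations index items by the symbol after the dot.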

Documentation

Here is the API documentation.

algaett's People

Contributors

dependabot[bot], favonia, jonsterling, mmcqd


algaett's Issues

Update install instructions

Our installation guide is currently broken. Two issues:

  1. The README does not mention that this should install it: opam pin git+https://github.com/RedPRL/algaett
  2. However, one has to pin asai first (or we should publish a version of asai now); the CI scripts actually failed because of this.

So maybe this is what we need in the README?

opam pin git+https://github.com/RedPRL/asai
opam pin git+https://github.com/RedPRL/algaett

Remove unrecoverable errors from the interface

@mmcqd @jonsterling Currently, each top-level module (directory) has its own error type, and that's probably leading to lots of boilerplate with little benefit. Proposals:

  1. Keep the current principle
  2. Centralize all errors into one place

I wonder if there are any other counter proposals, no matter how stupid they might sound?
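For concreteness, proposal 2 might look like the following sketch (the constructors here are hypothetical, not taken from the codebase): one shared error type and one exception, instead of a separate error type per directory.

```ocaml
(* A hypothetical centralized error module shared by all top-level modules. *)
module Error = struct
  type t =
    | Parse_error of { pos : int; msg : string }
    | Unbound_name of string
    | Type_mismatch of { expected : string; actual : string }

  exception E of t

  (* Every component raises through the same channel. *)
  let fatal (e : t) : 'a = raise (E e)

  let to_string : t -> string = function
    | Parse_error { pos; msg } -> Printf.sprintf "parse error at %d: %s" pos msg
    | Unbound_name n -> "unbound name: " ^ n
    | Type_mismatch { expected; actual } ->
      Printf.sprintf "expected %s but got %s" expected actual
end

let () =
  match Error.fatal (Error.Unbound_name "foo") with
  | exception Error.E e -> assert (Error.to_string e = "unbound name: foo")
  | _ -> assert false
```

The trade-off versus proposal 1 is the usual one: a single type makes top-level reporting trivial but couples every module to one definition.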

Holes don't work

Try the following code:

def _ : univ := ?

The result is some error:

dune exec algaett test.ag
Fatal error: exception Invalid_argument("app_ulvl")
Raised at Stdlib__Effect.Deep.try_with.(fun) in file "effect.ml", line 69, characters 47-54
Called from Loader.load in file "src/loader/Loader.ml", line 17, characters 21-78
Called from Dune__exe__Main in file "src/bin/Main.ml", line 3, characters 9-41

I never ran the code that builds holes, and this code is itself very subtle (in part because of the handling of universe polymorphism, which we are hitting here), so it needs to be debugged.

Add "goal purposes" to the refinement rules

I would like to add a new argument to the checking rules like:

module Purpose : sig 
  type t
  val unknown : t
  val head : Yuujinchou.Trie.path -> t
  val app : t -> cell -> t 
  val fst : t -> t 
  val snd : t -> t
end

Then in the refiner, we update the checking rules to plumb these purposes around. Here are a few representative examples, exercising each kind of purpose:

let lam ~name ~cbnd : R.check =
  R.Check.rule @@ fun ~tp ~purpose ->
  match tp with
  | D.Pi (base, fam) | D.VirPi (base, fam) ->
    RefineEffect.bind ~name ~tp:base @@ fun arg ->
    let fib = NbE.inst_clo' fam @@ Hyp.tm arg in
    S.lam @@ R.Check.run ~tp:fib ~purpose:(Purpose.app purpose arg) @@ cbnd arg
  | _ ->
    invalid_arg "lam"

let pair ~cfst ~csnd : R.check =
  R.Check.rule @@ fun ~tp ~purpose ->
  match tp with
  | D.Sigma (base, fam) ->
    let tm1 = R.Check.run ~tp:base ~purpose:(Purpose.fst purpose) cfst in
    let tp2 = NbE.inst_clo fam @@ RefineEffect.lazy_eval tm1 in
    let tm2 = R.Check.run ~tp:tp2 ~purpose:(Purpose.snd purpose) csnd in
    S.pair tm1 tm2
  | _ ->
    invalid_arg "pair"

let app ~itm ~ctm : R.infer =
  R.Infer.rule @@ fun ~purpose:_ ->
  let fn, fn_tp = R.Infer.run ~purpose:Purpose.unknown itm in
  match NbE.force_all fn_tp with
  | D.Pi (base, fam) | D.VirPi (base, fam) ->
    let arg = R.Check.run ~tp:base ~purpose:Purpose.unknown ctm in
    let fib = NbE.inst_clo fam @@ RefineEffect.lazy_eval arg in
    S.app fn arg, fib
  | _ ->
    invalid_arg "app"

Then we adapt the top-level checking thing to take the identifier that is being implemented:

let check_top tm ~tp ~path =
  RefineEffect.trap @@ fun () ->
  S.lam @@
  RefineEffect.with_top_env @@ fun () ->
  Rule.Check.run ~tp:(NbE.app_ulvl ~tp ~ulvl:(RefineEffect.blessed_ulvl ())) ~purpose:(Purpose.head path) @@ check tm

let infer_top tm ~path =
  RefineEffect.trap @@ fun () ->
  let tm, tp =
    RefineEffect.with_top_env @@ fun () ->
    let tm, tp = Rule.Infer.run ~purpose:(Purpose.head path) @@ infer tm in tm, RefineEffect.quote tp
  in
  S.lam tm, NbE.eval_top (S.vir_pi S.tp_ulvl tp)

The goal of this feature is to support (1) better error messages, and (2) eliminators that do not forget their origin --- so that we can print them like f x y rather than nat-rec {asdfkjhadslkfjhalsdkjfhasjdfha lksdf alkjhdf lakjshdf ljkad flkjh asdflkjh adslfkjh asdlkjfh asdlfjh asldkjfh alskjdf halkjsdhf lakjd flakjs dflkajsdf lkjahsf jalkjh} x y.

A secondary goal is to integrate with McBride-style "labeled types", but I want to design that feature only after getting this in.

Options for conversion/normalization

  • --allows-empty-system: no ghcom etc
  • --fhcom-kan=?: try different implementations of fcom and V types.
  • --paranoid: normalize everything extremely eagerly
  • --favonia: trolling mode

@mortberg What would be on your list to try out?

What is missing from the current name printing?

  1. Sometimes Name.pp is used and sometimes Pp.Env with Name.to_string is used; this needs to be made consistent.
  2. We are still shadowing global names in the signature (including constructors, data types, etc).
  3. Handling of variables which already have suffixes is suboptimal.
  4. Unnamed variables should try x,y,z,w for expression variables and i, j, k for dimension variables.
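Point 4 could be implemented with a small helper along these lines (a sketch; the function name and its interface are hypothetical): cycle through the preferred base names, appending numeric suffixes once the bare pool is exhausted.

```ocaml
(* Pick a printable name for an unnamed variable: x, y, z, w for expression
   variables and i, j, k for dimension variables, falling back to x1, y1, ...
   once every bare name in the pool is taken. *)
let fresh_name ~(dim : bool) (used : string list) : string =
  let pool = if dim then ["i"; "j"; "k"] else ["x"; "y"; "z"; "w"] in
  let rec go round =
    let suffix = if round = 0 then "" else string_of_int round in
    match
      List.find_opt (fun base -> not (List.mem (base ^ suffix) used)) pool
    with
    | Some base -> base ^ suffix
    | None -> go (round + 1)
  in
  go 0

let () = assert (fresh_name ~dim:false [] = "x")
let () = assert (fresh_name ~dim:false ["x"] = "y")
let () = assert (fresh_name ~dim:true ["i"; "j"; "k"] = "i1")
```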

Fix Unfold, Axiom, and Def in the presence of shadowing

This is a note that the current conversion checker is incorrect in the presence of shadowing, because the constructs Domain.Unfold, Domain.Axiom, Syntax.Axiom, Syntax.Def only remember the top-level names provided by the user. The checker could potentially mistake heads with the same top-level names as judgmentally equal, depending on whether unfolding is forced. A correct implementation should instead use a more globally unique identifier, such as a pair of a file path and its index in that unit (e.g., "/cool/lib.ag", 10). I would like to address this issue only after rewriting the bantorra library (a library for unit path resolution).
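The globally unique identifier suggested above might be sketched as follows (module and field names are hypothetical): a pair of the unit's file path and the definition's index within that unit, compared structurally instead of by user-provided name.

```ocaml
(* A shadowing-proof identifier for top-level heads: the defining unit's
   path plus the definition's position in that unit, e.g. the example from
   the issue: ("/cool/lib.ag", 10). *)
module Global = struct
  type t = { unit_path : string; index : int }

  let equal a b =
    String.equal a.unit_path b.unit_path && Int.equal a.index b.index
end

(* Two definitions that shadow each other share a name but never an id. *)
let older = Global.{ unit_path = "/cool/lib.ag"; index = 10 }
let newer = Global.{ unit_path = "/cool/lib.ag"; index = 11 }
let () = assert (not (Global.equal older newer))
let () = assert (Global.equal older { unit_path = "/cool/lib.ag"; index = 10 })
```

With Domain.Unfold, Domain.Axiom, Syntax.Axiom, and Syntax.Def carrying such an id, the conversion checker can no longer conflate distinct heads that happen to share a top-level name.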

πŸ₯‘ [Meta] Turn components into reusable packages

To speed up the development of the next generation of cooltt, I propose turning isolated components into more OCaml packages with anime-inspired names. Here are some ideas:

As a general rule of thumb, perhaps new features should strive to become standalone libraries.

Precise Unicode emoji parsing is too slow

Currently the parser precisely recognizes the usual ASCII stuff, all Unicode emojis, and a few Unicode symbols. The precise grammar for emojis is unfortunately too large for the Earley library we use: even a small program can take seconds to parse. It seems we have to give up the precise grammar for Unicode emojis. 😭
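A coarser alternative is to accept any code point in a few broad Unicode blocks rather than enumerating every emoji in the grammar. A sketch (the ranges below are illustrative, not a complete emoji classification):

```ocaml
(* Coarse "looks like an emoji" test on a Unicode scalar value, replacing a
   per-emoji grammar with a handful of block ranges. Deliberately imprecise:
   it over-approximates in exchange for a tiny grammar. *)
let is_emoji_ish (cp : int) : bool =
  (cp >= 0x1F300 && cp <= 0x1FAFF)  (* pictographs, transport, supplemental *)
  || (cp >= 0x2600 && cp <= 0x27BF) (* miscellaneous symbols and dingbats *)

let () = assert (is_emoji_ish 0x1F984)       (* unicorn *)
let () = assert (not (is_emoji_ish 0x0041))  (* 'A' *)
```

The lexer would then emit a single EMOJI token class, keeping the Earley grammar small at the cost of admitting some non-emoji symbols.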
