
😎TT

Home Page: http://www.redprl.org/

License: Apache License 2.0

Makefile 0.25% OCaml 95.85% Shell 0.06% Emacs Lisp 2.04% Nix 0.39% Vim Script 1.41%
cubical-type-theory proof-assistant type-theory homotopy-type-theory ocaml ocaml-program

cooltt's Introduction

cooltt

A cool implementation of normalization by evaluation (NbE) & elaboration for Cartesian cubical type theory.

For examples, see the test/ directory.

This implementation is forked from blott, the implementation artifact of Implementing a Modal Dependent Type Theory by Gratzer, Sterling, and Birkedal. Code has been incorporated from redtt, implemented by Sterling and Favonia.

A small collection of example programs is contained in the test/ directory. See test/README.md for a brief description of each program's purpose.

Building

cooltt is built with OCaml 5.0 and opam 2.0.8.

With OPAM

If you are running an older version of OCaml, create a switch for OCaml 5.0.0 with the following command:

$ opam switch create 5.0.0

Once these dependencies are installed, cooltt can be built with the following commands.

$ opam update
$ opam pin add -y cooltt .              # first time
$ opam upgrade                          # after packages change

After this, the executable cooltt should be available. Locally, cooltt is built with dune, which the commands above also install; the makefile can be used to rebuild the package for small tests. Once dune is available, the executable can be changed and run locally with the following:

$ make upgrade-pins                     # update and upgrade dependencies in active development
$ dune exec cooltt                      # from the `cooltt` top-level directory

With Nix

First, you'll need the Nix package manager, and then you'll need to install or enable flakes.

Then, cooltt can be built with the command

nix build

to put a binary cooltt in result/bin/cooltt. This is useful if you just want to build cooltt and play around with it.

If you're working on cooltt, you can enter a development shell with an OCaml compiler, dune, and other tools with

nix develop

and then build as described in the With OPAM section above.

Acknowledgments

This research was supported by the Air Force Office of Scientific Research under MURI grants FA9550-15-1-0053, FA9550-19-1-0216, and FA9550-21-1-0009. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any sponsoring institution, government or any other entity.

cooltt's People

Contributors

cangiuli, clayrat, dependabot[bot], ecavallo, ejgallego, entropyfails, favonia, ivoysey, jonsterling, jozefg, mmcqd, ralsei, solomon-b, totbwf


cooltt's Issues

Lightweight unification

I want to see if it is possible to make the nbe equality checker return a list of flex-flex and flex-rigid constraints which would make the equation true. Alternatively, a naive but possibly workable-to-start version would be to raise such a constraint as an exception which carries a continuation, kind of like an algebraic effect.
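The exception-carrying variant can be sketched in a few lines of OCaml. Everything below (term representation, constraint type, function names) is illustrative, not cooltt's actual API: the point is only that the equality checker raises a postponed constraint instead of failing when it hits a metavariable.

```ocaml
(* Toy terms: rigid variables, flexible metavariables, application. *)
type tm =
  | Var of int
  | Meta of int
  | App of tm * tm

(* Constraints the checker can emit instead of failing outright. *)
type cstr =
  | FlexFlex of int * int    (* ?m = ?n *)
  | FlexRigid of int * tm    (* ?m = rigid term *)

exception Postpone of cstr

(* Equality check that raises a constraint when it reaches a
   metavariable, rather than reporting a mismatch. *)
let rec equate t1 t2 =
  match t1, t2 with
  | Var i, Var j -> if i <> j then failwith "mismatch"
  | Meta m, Meta n when m = n -> ()
  | Meta m, Meta n -> raise (Postpone (FlexFlex (m, n)))
  | Meta m, t | t, Meta m -> raise (Postpone (FlexRigid (m, t)))
  | App (f1, a1), App (f2, a2) -> equate f1 f2; equate a1 a2
  | _ -> failwith "mismatch"

(* Catch the postponed constraint at the boundary of the check. *)
let constraints_of t1 t2 =
  match equate t1 t2 with
  | () -> []
  | exception Postpone c -> [c]
```

A real implementation would resume the check after recording the constraint (hence the continuation idea); this sketch only shows the raise/catch shape.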

Universe decoding up to (strict) iso

The current design is that the main type theory of cooltt is a (cubical) logical framework, with no notion of Kan operation or even universe level, etc. Then, "cubical type theory" will be implemented (primitively) as a universe of Kan types in the CuLF, where everything decodes to LF types. The original idea was that (for instance) the extension type code would decode to a complex LF type involving pi and sub.

Because this decoding is necessarily lossy at the level of type-codes, I think it may be advantageous to have it not hold up to definitional equality, but instead be mediated by a real El(A) connective with intro and elim (and beta and eta) rules that enact one step of standard El decomposition. In other words, it would be a weak universe à la Tarski rather than a strict one. The motivation is to improve the usability of the proof assistant --- the elaborator will take care of applying those El/intro,El/elim rules as needed, and we would gain the advantage of not losing that information immediately. This can be used to, for instance, support having fewer annotations in the source language, and make interactive development easier.

While the motivation for this idea is essentially syntactic and pertains to human usability, it also has mathematical advantages. Weak universes à la Tarski are much more natural mathematically than strict ones, so this seems like a net benefit on all fronts.
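The weak/strict distinction can be sketched in OCaml with made-up codes (nothing here is cooltt's real implementation): in a weak universe, codes decode through an explicit El connective whose elim rule performs one step of decomposition on demand, rather than the decoding holding definitionally and erasing the code.

```ocaml
(* Hypothetical codes in the universe and types in the LF. *)
type code =
  | CNat
  | CPi of code * code

type tp =
  | Nat
  | Pi of tp * tp
  | El of code    (* decoding is mediated by El, not silently unfolded *)

(* One step of standard El-decomposition, as the elim rule would enact.
   The elaborator applies this only as needed, so the type-code
   information is not lost immediately. *)
let el_step : code -> tp = function
  | CNat -> Nat
  | CPi (a, b) -> Pi (El a, El b)
```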

Add "veils" for opacity

I would like to build into the elaboration and evaluation environments some kind of mask, which I will call a "veil", to control unfolding of top-level constants during evaluation.

Ideally, a great deal of evaluation can be done with the discrete veil (unfolding nothing); only evaluation that is meant to be shunted to a pattern match should be done with the chaotic veil (unfolding everything).
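A veil could be as simple as a policy on top-level names. Here is a hypothetical OCaml sketch; the type and function names are invented for illustration:

```ocaml
(* A veil decides which top-level constants evaluation may unfold. *)
type veil =
  | Discrete                      (* unfold nothing *)
  | Chaotic                       (* unfold everything *)
  | Custom of (string -> bool)    (* unfold a chosen subset *)

let unfoldable (v : veil) (name : string) : bool =
  match v with
  | Discrete -> false
  | Chaotic -> true
  | Custom p -> p name
```

Most evaluation would then consult the discrete veil, falling back to the chaotic one only where a pattern match forces unfolding.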

Consider moving many of the custom closures to 'con'

I believe there is a reasonable way to support moving many of those custom closures into 'con'; this would provide a path forward for treating the arguments to hcom not as primitive binders but as LF terms, and then we would additionally get rid of the use of n-ary binders and n-ary closures, greatly simplifying things.

I'm going to think about this and chart a path forward, but I intend to avoid stepping on #33. There will be some churn, but I will find an appropriate time to carry it out that doesn't disturb ongoing work.

(Update: I just did this inside of #33; none of the raise Todo is affected, so I consider the "stepping on" avoided.)

selectively bind arguments in a multi-lambda?

Not sure if this is a good idea. Sometimes I'm defining a cube and for whatever reason it's more convenient if I flip it along some diagonal first. I could define a flip : (dim -> dim -> A) -> (dim -> dim -> A), but then when I write flip ? I won't have any boundary information in the hole. So maybe it would be useful to be able to write something like this:

def cool : {
  (A : univ)
  (p : dim -> dim -> A)
  -> sub {dim -> dim -> A} #t {\i j => p j i}
} = {
  \A p * j => p j
}

where * means "skip over binding this variable". It should be possible for a tactic like that to propagate boundary info, right?

User-friendly pretty printer

This can be a real rabbit hole, but I think these are in increasing order of preference for me:

_x₁ : (prf (or (= _x 0) (= _x 1)))
_x₁ : prf (_x = 0) ∨ (_x = 1)
_x₁ : prf (_x = 0 ∨ _x = 1)

Treat binders in hcom/coe as functions rather than closures

I'd like to think of hcom/coe as constants in the logical framework (regardless of whether we end up implementing them this way); so, in particular, it would be appropriate to think of the line and tube things as lambda-bound functions.

This would imply #35.

Get rid of weird 'gtp' stuff

Some initial cost of apparent code duplication (which I strongly believe can still be factored away) is a small price to pay for simplifying the code conceptually and making it more uniform.

don't print vacuous cofibrations

right now, given the input

def formation : {
  (A : dim -> univ) (a : A 0) (b : A 1) -> univ
} = {
  \A a b =>  -- normal lambda form
  path A a ?  -- this is the new form for paths
}

the pretty printer gives the following information for the hole

Emitted hole:        
  A : (-> [_x : dim]  univ)
  a : (el (A 0))
  b : (el (A 1))
  _x : dim
  _x₁ : (prf (or (= _x 0) (= _x 1)))
  _x₂ : (prf (= _x 1))
  |- ? : (el (A _x)) [_x₃ : (= _x 0) => []]

This is correct but somewhat misleading. The constraint [_x₃ : (= _x 0) => []] is satisfied vacuously: since _x = 1 is in the context and equality of dimensions is decidable, we know that _x can never be 0. Therefore any el (A _x) will do, because all of them satisfy the constraint definitionally; which is to say, any el (A 1) will do, including the b already in the context.

A better output might look like

Emitted hole:        
  A : (-> [_x : dim]  univ)
  a : (el (A 0))
  b : (el (A 1))
  _x : dim
  _x₁ : (prf (or (= _x 0) (= _x 1)))
  _x₂ : (prf (= _x 1))
  |- ? : (el (A _x))

An even better output might even look like

Emitted hole:        
  A : (-> [_x : dim]  univ)
  a : (el (A 0))
  b : (el (A 1))
  _x : dim
  _x₁ : (prf (or (= _x 0) (= _x 1)))
  _x₂ : (prf (= _x 1))
  |- ? : (el (A 1))

but that's a different issue.
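The vacuity check itself is cheap precisely because dimension equality is decidable. A toy OCaml sketch, with a representation invented purely for illustration:

```ocaml
(* Dimension constants 0 and 1. *)
type dim = D0 | D1

(* Context entries [x = d] pinning a dimension variable to a constant. *)
type eqn = string * dim

(* A boundary constraint [x = d] is vacuous when the context already
   forces x to the other constant, so the constraint can never fire. *)
let vacuous (ctx : eqn list) ((x, d) : eqn) : bool =
  List.exists (fun (y, d') -> x = y && d <> d') ctx
```

A pretty printer could run this over each constraint and suppress the vacuous ones, giving the first of the two improved outputs above.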

Add coercions to core language

The simplest example of coercions will be for universe levels. Later on, I wish to experiment with weak substitutions, ultimately deleting the type equality judgment entirely (as an experiment).

Add types for the "fiber of El at A under \phi"

This will be needed to satisfactorily implement #26 / #25.

In particular, what I want is to be able to write something like a : Univ such that under phi, el(a) = A. This is not currently expressible as a cubical subtype of Univ, but it is semantically sensible and maybe even operationalizable.

Substitution up to isomorphism

A place to discuss.

So, the only potential way I see of getting owned (usability-wise) by having substitutions only up to isomorphism would be in connection with types that don't have eta rules, specifically the eliminators of inductive types.

Here, you could get into all sorts of nasty situations where you have a coercion around the eliminator vs inside it, etc., and it is painful to mediate between them. So, the question should be whether this is something that can be overcome, using some very clever approach, or if it is something fatal to the idea.

N-ary split tactics

[phi -> M | ... | psi -> N]

This would correspond nicely to the use of n-ary joins in the cofibrations. With n-ary splits, we would have a single elimination form for joins, removing the need for nesting as well as a separate abort.
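With stand-in types, the way the n-ary form subsumes both binary split and abort can be sketched in OCaml (everything here is illustrative; cofibrations and tactics are just strings and functions):

```ocaml
(* Stand-ins: cofibrations as strings, tactics as functions on goals. *)
type cof = string
type tac = string -> string

(* An n-ary split runs each branch tactic under its cofibration.
   The empty split plays the role of abort, so no separate form is
   needed, and nesting of binary splits is avoided. *)
let split (branches : (cof * tac) list) (goal : string) : string =
  branches
  |> List.map (fun (phi, t) -> phi ^ " -> " ^ t goal)
  |> String.concat " | "
  |> fun body -> "[" ^ body ^ "]"
```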

Remember names of bound variables

The purpose of this ticket is to make output code more readable.

This can be done in one of two ways: remember the name at the binding site, or remember the name at the variable site. These both have advantages and disadvantages:

  1. If we remember the name at the binding site, then we can ensure a correct mapping in pretty printed output between binders and variables --- that is, printing-shadowing bugs are impossible. But the disadvantage is that printing requires a local environment, and there is no way to print some term without that environment (hence the use of these dump functions for debugging).

  2. If we remember the name at the variable site, then printing becomes very easy; but we may also somehow introduce confusing shadowing issues in the printed output, depending on various factors.
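Option 1 can be sketched in OCaml with invented constructors, which makes the environment requirement concrete:

```ocaml
(* Names are stored at the binding site; variables are bare de Bruijn
   indices. *)
type tm =
  | Ix of int
  | Lam of string * tm

(* Printing therefore needs an environment mapping indices to names;
   a term cannot be printed without one (hence the dump functions for
   debugging). Shadowing bugs are impossible by construction. *)
let rec print (env : string list) : tm -> string = function
  | Ix i -> List.nth env i
  | Lam (x, body) -> "\\" ^ x ^ " => " ^ print (x :: env) body
```

Under option 2, the variable would instead carry a name hint (e.g. Ix of int * string), making printing environment-free at the cost of possible shadowing in the output.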

Grammar paper-cuts

I'd like to open this ticket to track some papercuts in the grammar caused by me not knowing how to write parsers lol.

  • Need to write {A b c} -> D e f instead of A b c -> D e f
  • Need to write {i==0}\/{i==1} instead of i==0 \/ i==1

Source locations

OK, my proposal for getting started is as follows:

  1. The ElabEnv should have a "current span" field --- the invariant is that as you get deeper into an elaboration problem the span narrows. It is the responsibility of the elaborator to set this span correctly. This information can then be used when emitting messages or errors.

  2. I guess we should annotate the concrete syntax in such a way that the elaborator has this information. One way is to add dummy nodes in the concrete syntax that annotate with source information, and then the elaborator executes these by locally updating the ElabEnv.
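Proposal (1) can be sketched in OCaml; the field and function names below are invented for illustration, not cooltt's actual ElabEnv:

```ocaml
(* A source span; real spans would carry line/column or offsets plus a
   file name. *)
type span = { start_pos : int; end_pos : int }

(* The elaboration environment carries the current span, which narrows
   as elaboration descends into sub-problems. *)
type elab_env = { current_span : span option; (* ... other fields *) scope_depth : int }

(* Locally set the span for a sub-problem, reader-monad style; the
   update is scoped to the continuation [k]. *)
let with_span (sp : span) (env : elab_env) (k : elab_env -> 'a) : 'a =
  k { env with current_span = Some sp }
```

Error and message emission would then read current_span off the environment, and the dummy location nodes in the concrete syntax would elaborate to calls of with_span.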

A flat modality for the LF

To implement a number of things properly, it will be useful and/or necessary to have a flat modality; it is OK if it is restricted to occur as an argument to a pi type, which often simplifies matters --- these kind of types are called "virtual" in cooltt, and include dim,cof as well.

maybe factor out some commonly built terms

      let* piuniv =
        lift_cmp @@
        quasiquote_tp @@
        QQ.foreign_tp univ @@ fun univ ->
        QQ.term @@
        TB.pi TB.tp_dim @@ fun i ->
        univ

appears once in Nbe and twice in Refiner;

      let* bdry_tp =
        lift_cmp @@
        quasiquote_tp @@
        QQ.foreign_tp univ @@ fun univ ->
        QQ.foreign fam @@ fun fam ->
        QQ.term @@
        TB.pi TB.tp_dim @@ fun i ->
        TB.pi (TB.tp_prf (TB.boundary i)) @@ fun prf ->
        TB.el @@ TB.ap fam [i]

appears more or less verbatim in Nbe and the Refiner. It's not clear whether there is enough code duplication to make the overhead of a library of combinators for commonly built terms (possibly as an extension to TermBuilder) worthwhile, or whether it would just be an obfuscation.

Let the user define an Open Modality

I had a nice idea for a cool use of the cofibration machinery. If the user can declare a fresh cofibration, then this is basically enough to define a strict "open modality". Then we could potentially use cooltt to mechanize some neat Artin gluing arguments.

Conversion under complex cofibration

When something like phi \/ psi is already in the context, conversion doesn't really do the right thing --- it tries converting uniformly under the assumption, but it really needs to split on it ASAP. (It does do the right thing when adding new assumptions to the context; this is just about existing assumptions.)

This can be fixed as follows: before doing equality checks in the elaborator, first go through the context and bundle up all the assumed cofibrations and then do under_cofs (meet phis) (equate ...).
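The bundling step can be sketched in OCaml; the constructors are invented, and under_cofs/equate are the names from the issue text, not necessarily cooltt's real API:

```ocaml
(* A toy cofibration syntax. *)
type cof =
  | Top                    (* the true cofibration *)
  | Eq of string * int     (* dimension equation, e.g. i = 0 *)
  | Join of cof * cof      (* phi \/ psi *)
  | Meet of cof * cof      (* phi /\ psi *)

(* Bundle all cofibrations assumed in the context into a single meet,
   as the proposed under_cofs (meet phis) (equate ...) would consume.
   Running the check under the meet forces the split on any joins. *)
let meet_all : cof list -> cof =
  List.fold_left (fun acc phi -> Meet (acc, phi)) Top
```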

Add "Cone of Silence" tactic

We will need a special form like << ... >> or something that does the following: it temporarily ignores the required boundary and just elaborates something at the requested type. The result is that it leaves a hole, but lets you interactively develop on the inside.

This is for cases where you have partly written some code but it doesn't have the right boundary yet.

This would roughly be a chk_tac -> bchk_tac.

Elaboration of coe

The first step toward this is to write down the refinement rule in math. I want to support the bchk_tac in a stronger way than redtt did.

Replace core typechecker with "replay" elaborator

The idea is that this would be an elaborator from raw core-language terms, calling the primitive tactics from the refiner.

The result would be that the real typechecking code is always shared, and trusted in the refiner.

Combine check-env with elab-env

Ultimately, I would like to de-emphasize the typechecker; it is frankly not something that should be being called by the elaborator on a regular basis anyway. The invariant of the elaborator is that it produces well-typed terms, and in most cases, it is not necessary to "guard" this with a call. So, the elaborator environment should be the primary thing.

trouble with let

def simple : {
  (A : univ) (a : A) -> A
} = {
  \A a =>
  let b : A = a in
  b
}

normalize simple

raises Internal error (Failed to normalize): do_sub_out.

I failed to track down the reason. (The same thing happens on the pathcoe or fhcom branches btw.)

Grammar 🧞 in service

Following #44, let's accumulate more items here. Don't close this issue immediately if you fix the first problem. 😄

  • Need to write k=0 ∨ {∂ i} instead of k=0 ∨ ∂ i (done by #70 and #63)
  • Allow i=0 ∨ j=0 ∨ k=0 (done by #73)
  • Omit the lambda symbol (done by #92)
  • Allow n-ary tuples (but then we have to decide what [ a => b ] means)
  • Drop the parens around type annotations, and change the operator to : (#94 and then partially reverted by #105)
  • fst x y should be parsed as {fst x} y. So should snd, vproj, and cap.
