
hackworthltd / primer


A pedagogical functional programming language.

License: GNU Affero General Public License v3.0

Languages: Makefile 0.13%, Nix 3.19%, Haskell 95.61%, Dhall 0.04%, PLpgSQL 0.36%, Shell 0.66%
Topics: primer, functional-programming, education, programming, programming-language

primer's People

Contributors

brprice, dependabot[bot], dhess, georgefst, github-actions[bot], github-merge-queue[bot], mergify[bot], patrickaldis

Forkers

jwhessjr

primer's Issues

"Add an argument" analogues for other bunched things

We should consider adding a sidebar action and a canvas '+' button for lambdas in grouped mode. Probably some other constructs need it too, for consistency: foralls, applications, etc.?

This should be fairly straightforward once we figure out what constructs need it.

Should we milestone this for the demo?

More property test iterations on CI

Once we've finished with the follow-ups to hackworthltd/vonnegut#765 which convert our unit tests to use tasty-hunit (#801, hackworthltd/vonnegut#802 etc.), we could pass e.g. --hedgehog-tests 100000 to the test executable to run more iterations.

This wouldn't be a good idea before then, since the CLI flag overrides the withTests 1 call in the source code, so we'd end up re-running our unit tests as well, which is obviously a waste of resources.
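For context, a minimal sketch of the pattern in question (the property name is hypothetical; this assumes Hedgehog's withTests and tasty-hedgehog's --hedgehog-tests flag behave as described above):

    import Hedgehog (Property, property, withTests, (===))

    -- A unit-style test written as a Hedgehog property. The hardcoded
    -- 'withTests 1' is what the --hedgehog-tests CLI flag would override,
    -- causing this "unit test" to be re-run thousands of times.
    hprop_reverseExample :: Property
    hprop_reverseExample = withTests 1 . property $
      reverse [1, 2, 3 :: Int] === [3, 2, 1]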

This might not be worth running on every PR; it could instead be, say, a weekly background check.

It's likely to be particularly useful after hackworthltd/vonnegut#779, since the generators there seem more likely to have rare corner cases.

Improve genAPP and do EvalFull tests in extended context

When doing hackworthltd/vonnegut#800, I found there is a problem with doing this in EvalFull: our term generators are rather discard-happy for the APP case. This is OK when just generating one term, but when evaluating in a context with a bunch of globals, we get a multiplicative knockdown in productivity. I have an idea for how to improve it, but am spinning that off into another PR.

The idea is to improve how we generate t @T ∈ S (for a given S).
Currently we generate t ∈ ∀a._ and hope that there is some T that makes the types work out. This is fairly rare!
Instead, we could generate versions S' of S with some subtrees replaced by a (picking any one subtree will be fine, but there should also be some chance of replacing none, the whole tree, or multiple identical subtrees), and generate t ∈ ∀a.S'. This will (I think) go much better, because we can always just generate t = ? : ∀a.S.
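A hedged toy sketch of that replacement idea (Ty and the generator below are stand-ins for illustration, not the real core types or genWTType):

    import Hedgehog (Gen)
    import qualified Hedgehog.Gen as Gen

    -- Toy type syntax standing in for the real core types.
    data Ty = THole | TVar String | TFun Ty Ty
      deriving Show

    -- Build a candidate S' from S by replacing some subtrees with the
    -- bound variable "a". At each node we either abstract the whole
    -- subtree or keep it and recurse, so there is some chance of
    -- replacing nothing, everything, or several identical subtrees.
    abstractSubtrees :: Ty -> Gen Ty
    abstractSubtrees s =
      Gen.frequency
        [ (1, pure (TVar "a"))
        , (3, descend s)
        ]
      where
        descend (TFun dom cod) = TFun <$> abstractSubtrees dom <*> abstractSubtrees cod
        descend t              = pure t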

[I tried to change the APP case to try 100 times and then fall back to generating an empty hole, but this led to extremely slow generators. I assume this is because there is a decent chance of generating, say, 3 nested APPs: previously this would retry 100 times and then give up, but now each level retries 100 times before falling back to a hole, so the work multiplies to roughly 100^3 attempts!]

"(Smart) Apply a function to this argument" action

Imagine we have this scenario:

[screenshot]

If we had a slight twist on the action implemented by hackworthltd/vonnegut#815, described as, say, "Apply a function to this argument," then we could select f in the above expression for testList, apply map to it, and arrive at this in one step:

[screenshot]

This would be very useful!

In cases of multi-ary functions like map, we could just apply the selected argument at the first argument position, or, even better, the first argument position where it fits the type signature of the function you're applying to it.

Relevant comments from the hackworthltd/vonnegut#815 discussion:

https://github.com/hackworthltd/vonnegut/pull/815#issuecomment-905540391
https://github.com/hackworthltd/vonnegut/pull/815#issuecomment-905573472

Bring backend into spec with Primer 1.0

(This is a tracking issue.)

First we need to decide on Primer 1.0's semantics and write down some operational semantics, in order to provide a written specification. Then we need to update the backend so that it's a valid implementation of that specification, or as close as we can feasibly get.

Improve 'raise' to preserve more type information (and play nicely with smart holes)

This is a placeholder for now. I should come back and fill this in.
There was some discussion on hackworth.dev on 1st July, and there is some discussion in the commit "some notes on being careful about syn/chk", 1d2c8d1f791aa7f19a8e8d20876db7f53a01deaa. (I will copy this into a comment below, so it does not get lost during a rebase.)

Description

This is a feature request.

<Include a brief description here.>

Dependencies

<Link to other GitHub Issues or PRs that must be completed before work can begin on this feature.>

Spec

<Describe what the feature will do, from a high-level perspective.>

Implementation details

<If there are any particularly tricky implementation bits that are worth discussing or you haven't quite figured out yet, describe the details here.>

Not in spec

<Describe any features that your implementation is explicitly avoiding, that a reasonable person might think should be in spec. For example, if you're adding a new action that operates on variables, but it only works with term variables and not type variables, you might want to mention it here so that the scope of the feature is clear.>

Discussion

<If there's a GitHub Discussions topic where this feature has been/is being discussed, link it here.>

Future work

<Describe ideas or additional features that might be useful once this feature has been implemented. This is a good place to link to other GitHub Issues or PRs that track this future work.>

Insert nodes above a subtree

When trying to do an insert action for a node with multiple children (e.g. an application) on something that is not a hole, there are multiple things one could mean (one per child of the inserted node), e.g.:

  • t ~> t $ ?
  • t ~> ? $ t

It is not intuitive which of these will happen, and you have no option if you happen to want the non-implemented one.

[This happens with more than just application, but I'll restrict to that case for terminological reasons]

It would be good to make it clear which will happen, and maybe even give an option.
This could be done

  • badly: with two action buttons, "apply this to a new argument" and "apply a new function to this". This seems a bit rubbish: why have two separate buttons for things which are so clearly related?
  • pictorially: instead of having verbose explanations, have pictograms (obviously should do it better than this ascii art, but I hope it gets the idea across):
      .         .
     / \       / \
    *   ?     ?   *
    
  • combinedly: instead of having two buttons, can we have one button with some options.
    • Perhaps split in half where you can only click on the "options" and not the "header" [this could obviously be combined with the pictograms]
      Insert an application
        keep this as fn  |  keep this as arg
      
    • Or perhaps a fancy interactive pictogram (I don't see how to do this with text) where we render the button
      Insert an application
        .
       / \
      o   o
      
      and the click targets are the two o. Hovering over one will convert the pictogram into the form above with a * where you hover (where the current tree will go) and a ? where the new hole will go

Infinite loop in typechecker

https://github.com/hackworthltd/vonnegut/blob/cdfeeb110ba5e5d7da63b7a701667a93b7ebc006/backend/src/Vonnegut/Core.hs#L132

We have a core constructor with no typechecking rules: LetType!
Unfortunately, since we have a catch-all case in synth, we don't get the compiler yelling at us.
Even worse, this catch-all is supposed to catch checkable-only things, and it adds an annotation and tries again.
This means that synthesising the type of a LetType will always loop (with SmartHoles on)!

Thankfully, this hasn't bitten us yet since they only show up inside evaluation (the user cannot construct them manually), and we do no typechecking of the output of the evaluator.

There is an easy proximal fix: add TC rules (to both synth and check: follow Let/LetRec). This will also need a choice on exactly where the bound variable is in scope.

The let's-avoid-similar-incidents-in-the-future fix is less obvious to me: I'd really like OCaml-style multi-matches, so we can change that catch-all into catching lam/LAM/case/... (?) without it being really awkward. Does anyone have a good idiom here? I guess we could factor out the code into a ... where default = ... and just do

    synth e@Lam{} = default e
    synth e@LAM{} = default e
    synth e@Case{} = default e
    ...

but that is not very satisfying.

Remote exploit in `aeson`

Details: https://cs-syd.eu/posts/2021-09-11-json-vulnerability?source=reddit

Tracked upstream here: haskell/aeson#864

Reddit thread: https://www.reddit.com/r/haskell/comments/pm7rcr/cs_syd_json_vulnerability_in_haskells_aeson/

The vulnerability is via HashMap from https://hackage.haskell.org/package/unordered-containers, which we do not use (directly). We should consider banning the use of unordered-containers, and hashable (https://hackage.haskell.org/package/hashable) until this issue is addressed — if it ever is.

(It's not clear to me whether the maintainers mentioned in the disclosure post, who have apparently known about this issue for ~1 year, are the aeson maintainers, the unordered-containers maintainers, or the maintainers of hashable, upon which unordered-containers depends.)

Blocked on:

Should the `f $ ?` action replace the bare `$` action for empty holes?

Once hackworthltd/vonnegut#738 is merged, I think there's a case to be made that the new f $ ? saturated function action should replace the bare $ action for empty holes. The only two cases where I can think of bare $ being what you want are when you haven't yet written the function you want to use in the hole, or when your function is relatively simple (e.g., no foralls), you want to partially apply it, and raising it would be more hassle than just doing the applications yourself. However, I hope that this latter case will eventually be subsumed by better inference, where the action can tell how many applications to use based on the type of the hole, and/or by the "phantom application" approach described in #601.

At expert level, I suppose there's no downside to presenting both actions. (In the long term, we could measure usage and eliminate bare $ if f $ ? ends up being used much more often.)

However, at beginner level, I prefer having fewer choices in order to reduce the cognitive load, so long as the choices presented are sufficient for a beginner to write any plausible beginner program.

We should still offer $ for non-holes, of course. For example, if the cursor is on some function variable f, you may want to $ apply it.

A mode to emulate a poor internet connection, for local testing

We should consider (especially in light of our plans to make the new frontend completely dumb) having a mode that introduces extra lag when responding to requests. This should emulate the experience of interacting with vonnegut over the internet rather than locally on a dev/testing machine, and give us a better idea of whether we can really offload even interactive things to the backend.

This may well be possible by running behind some sort of proxy; this kind of tool is fairly common. Drew mentioned that it is an option in Apple's workflow for developing iPhone/iPad apps, Firefox has an option to throttle the internet connection in its dev tools, and I know the Linux device-mapper tool dmsetup has a similar option for simulating failing HDDs. Hopefully there is at least a decent body of knowledge we can draw on, if not an off-the-shelf solution.
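If we ever wanted a quick in-process approximation instead of an external proxy, a minimal sketch (assuming the service is an ordinary WAI application; addLatency and its delay parameter are hypothetical, not existing vonnegut code) might look like:

    import Control.Concurrent (threadDelay)
    import Network.Wai (Middleware)

    -- Delay every response by the given number of milliseconds, to crudely
    -- emulate a slow connection between the frontend and the backend.
    addLatency :: Int -> Middleware
    addLatency delayMs app request respond = do
      threadDelay (delayMs * 1000)  -- threadDelay is in microseconds
      app request respond

An external proxy or traffic-shaping tool would still exercise real network behaviour better; this would only be a rough local approximation.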

"Pushing down" let bindings

Discussed in https://github.com/hackworthltd/vonnegut/discussions/638

Originally posted by dhess June 30, 2021
Generally speaking, I really like the reduction rule in our evaluation semantics that converts applications to let bindings, as this rule obviates the need for environments (which are not first class in any programming language I'm aware of). However, in Keybase chat today, I pointed out the following limitation of this rule. Consider the following evaluation step in the application of map to a list:

[Screenshot, 2021-06-29 11:06 AM]

Now let's evaluate the APP redex such that we get a lettype β = Nat in ... in its place:

[Screenshot, 2021-06-29 11:08 AM]

The problem here is that I have to reduce the lettype β = Nat in ... for all occurrences of β in the subtree under the lettype before I can perform the function application (λf. ...) (λy.Zero). So if I want to mimic the call-by-name evaluation strategy, I can't. I can effectively only do call-by-value.

(Here I use our slightly odd lettype form, but the same goes for lets for term bindings.)

I proposed that, instead of the current rule which replaces the application with a single lettype at the application location in the tree, we push the lettype β down to each occurrence of β in the subtree, which would allow the student to explore different evaluation strategies.

@georgefst pointed out that in Harry's original "Evaluation steps in Vonnegut" document in Craft (the Apple URI scheme link to the relevant section of that document is craftdocs://open?blockId=CECD7E89-232B-4F44-9A6F-53DF188B9212&spaceId=8b43f204-aeeb-b8ce-45cc-73a653745299), under the "Making substitution incremental" heading, the major rationale for this reduction rule was to not change too much in the tree at once, making application easier to follow; or, as @georgefst put it, "it keeps the evaluation step small and local." By pushing the let or lettype down to each occurrence of the bound variable, we would be harming this nice pedagogical affordance.

@brprice then had the really interesting thought that by providing another reduction rule that "floats lets downwards" towards their occurrences, and changing the rule that allows lets to be eliminated once there are no more uses in their subtree to something more like, "a let can be eliminated once it's adjacent to its bound variable's use" (or similar), the student could explore these ideas themselves, all while following our small-step principles of evaluation.

Obviously after some deliberate practice and eventual understanding of this small-step let elimination, we'd probably want to change the rules to something more like what we have now, to keep this from being too tedious. (@georgefst raised the possibility of allowing the student to drag the let down the tree to its occurrence, to match it up and eliminate it in one gesture.) We could even eventually remove the "function-application-as-let-binding" rule and simply do substitution in one go.

Some tests discard most samples

E.g. https://github.com/hackworthltd/vonnegut/pull/779#discussion_r692359627, https://github.com/hackworthltd/vonnegut/pull/779#discussion_r692315167

We are testing either events that "should be rare" or events that the generator finds hard to sample, and so we have to crank up the allowed number of discards. We should perhaps improve the generator, or make a more targeted one? Since CI times are only impacted by a few seconds, we have committed the tests as they are for now.

More global language extensions?

Right now, hackworthltd/vonnegut#765 is failing because of some HLint warnings about unused extension pragmas. I think these can be useful for some extensions that you really don't want to enable globally (CPP, TemplateHaskell, maybe UndecidableInstances...), so I wouldn't want to disable them, but for others it's just busywork.

It's particularly annoying as they're not reported by HLS for some reason (haskell/haskell-language-server#2042). Otherwise I would have caught the issue before committing.

Anyway, shall we just expand the default-extensions list in our cabal files? Currently we have the following enabled via pragmas, and I don't see any harm in enabling them everywhere:

  • ConstraintKinds
  • DeriveFunctor
  • DuplicateRecordFields
  • ExistentialQuantification
  • FunctionalDependencies
  • GADTs
  • NamedFieldPuns
  • OverloadedLabels
  • OverloadedStrings
  • PolyKinds
  • TupleSections
  • TypeApplications
  • TypeOperators

One annoying thing is that we currently have five identical default-extensions lists, for our various components. We could use common stanzas in the cabal files to bring this down to three, but it's still less than ideal. (Roll on GHC2021...)

"Add an argument" action for function application

Whatever we do for hackworthltd/vonnegut#308, we should consider mirroring it for function application. The problem isn't as dire for application, as we already have a working implementation (modulo cursor movement) where the user can highlight the root App node and press the $ action button as many times as they need arguments, but the UX isn't particularly clear. By adding a + button inline and an explicit "Add an argument" action button, as is currently planned in hackworthltd/vonnegut#308, we could probably improve things.

Remove more holes automatically

We don't (even after hackworthltd/vonnegut#175) remove holes such as
{? Succ ?} Zero
because we don't have enough information to see if the hole is "finishable" at the point we handle the hole. In general, if a hole is in a synthesis position, we don't know where else downstream it is being referred to, so we dare not change its type.

I don't know how to do better here!

I also don't know how common this situation is in practice, so have no idea if this should be high priority or not.

UX improvements to cursor location

I think there are some small changes to the cursor location that would improve the user experience a bit:

  • When inserting an arrow type (→), leave the cursor on the LHS of the arrow. This makes it quick to flesh out the arrow type further.
  • When applying a function (i.e. inserting an application), leave the cursor on the RHS of the application. In my experience it's common to go from f to f ?, and the next thing you want to do is put something in the ?.

What reduction rules should we have for annotations?

We have a unit test for evaluation which says that letrec x:Bool=x in x takes 100 steps (in a synthesisable context) and times out, giving (letrec x:Bool=x in (x:Bool)):Bool. This has two annotations that one may wish to remove. Can/should we do better here? (NB: it is not entirely obvious that we are not doing better already, as this is an arbitrary timeout, not a normal form, so potentially we remove one on the next step. However, this does not happen.)

The reduction sequence is as follows, writing [_] for the elided embeddings of synthesisable terms into checkable terms. (This sequence would be a bit shorter if we reduced letrec x:T=t in C(x) to letrec x:T=t in C((letrec x:T=t in t):T), rather than letrec x:T=t in C(letrec x:T=t in (t:T)); maybe that would be worth doing regardless of any other decision.)

  • Start: letrec x:Bool=[x] in x
  • Inline the letrec, adding an annotation to ensure it is synthesisable: letrec x:Bool=[x] in (letrec x:Bool=[x] in ([x] : Bool))
  • Remove the head letrec: letrec x:Bool=[x] in ([x] : Bool)
  • inline and remove the letrec: [letrec x:Bool=[x] in ([x] : Bool)] : Bool
  • again inline and remove the letrec: [[letrec x:Bool=[x] in ([x]:Bool)] : Bool] : Bool
  • upsilon-contract: [letrec x:Bool=[x] in ([x]:Bool)] : Bool
  • Now we fall into a loop

There are two annotations that maybe we should remove somehow.

  • If we had a rule [e]:T ~> e, that would work, but I don't know whether this is a sane rule; I'm worried about confluence in particular. I have seen McBride talk many times about the upsilon rule we have, but very rarely about this one, which suggests it is probably problematic. This is the only way that I can see the outer annotation disappearing.
  • The inner one is inside an embedding, but the upsilon reduction is blocked by the letrec. We could either "push the letrec inside the annotation" (hackworthltd/vonnegut#771), or change our reduction rule for inlining letrec to put the letrec inside the annotation as suggested above. Both of these are pretty innocuous and may be a nice, easy improvement.

We should be able to inject `let`s into expressions

Consider this expression:

[Screenshot, 2021-08-17 11:26 PM]

I want to insert a let above the match x with node such that the expression reads:

λp. let x = ? in match p with ...

but we don't support this at the moment. I think @brprice has mentioned wanting something similar previously, and possibly wanted to support putting the target expression on the left-hand side of the let, as well?

CI check number of tests

One thing I am paranoid about is messing up the renaming of tests and failing to auto-detect them. One way to calculate the number of tests is cabal run vonnegut-test -- -l | wc -l.
I wonder if we can add something similar to CI that will warn us if the number of tests goes down, since that probably indicates auto-detection has been confused. (Maybe just a bot that comments on the change in the number of tests, and later on the change in test coverage.)

Originally posted by @brprice in https://github.com/hackworthltd/vonnegut/issues/801#issuecomment-902042693

Metadata in the evaluator

We should ensure that (both) evaluator(s) do sane things to the metadata if we want to allow the user to interact with a reduced tree (e.g. to show the type of nodes, which relies on metadata).

One option is to run a full TC pass after each step, but this is a rather sledgehammer approach

New policy: write valid Haddocks for all new Haskell code

It's time we start writing proper Haddocks for all new Haskell code. I propose that we have a flag day sometime soon and start requiring them for all new code that warrants them.

Also, I propose that we adopt a policy of writing proper Haddocks whenever we make a change to an existing top-level definition, so that over time, we can retroactively add them to existing code.
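For concreteness, here is a small, entirely hypothetical example of the sort of Haddocks this policy would ask for (the type and function are invented for illustration, not taken from our code):

    import Data.List (find)

    -- | A top-level definition, pairing a name with its arity.
    data Def = Def
      { defName  :: String -- ^ Fully qualified name of the definition.
      , defArity :: Int    -- ^ Number of arguments the definition takes.
      }

    -- | Find a definition by name.
    --
    -- Returns 'Nothing' when no definition with the given name exists.
    -- If several definitions share a name, the first match wins.
    lookupDef :: String -> [Def] -> Maybe Def
    lookupDef n = find ((== n) . defName)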

Any objections?

Expose "insert saturated thing" at the type level

We have it on the term level thanks to hackworthltd/vonnegut#738 and hackworthltd/vonnegut#743, but not at the type level.

  • For type vars (though currently we cannot make a higher-kinded var)
  • For type formers (e.g. insert List ?)

Note that we can't (easily) do the equivalent of hackworthltd/vonnegut#712, as we only do synthesis for types, though we could with a bit more work.

"Pushing down" let bindings

Our evaluator should not eliminate lets until they're "pushed down" to their use sites. This would permit more flexible interactive evaluation strategies in eval mode (see discussion in https://github.com/hackworthltd/vonnegut/discussions/638). It might also potentially simplify the most complicated bit of the current evaluator — see https://github.com/hackworthltd/vonnegut/pull/768#discussion_r678488543.

In order to preserve the "small local changes" spirit of our reduction rules in the current eval mode, we should probably implement this as @brprice suggested (and I documented in https://github.com/hackworthltd/vonnegut/discussions/638), such that lets float down towards their occurrences. @georgefst also suggested that we could add an affordance to allow students to click-drag a let to each of its use sites before eliminating it, which seems like a good idea, though it should probably be part of a suite of similar interactions, rather than just a special case interaction for eliminating lets.

But let's start simple. I propose that we change the current let elimination rule so that a let can only be eliminated once it's adjacent to the use of the bound variable, and that we add a "push down" rule that moves a let one step closer to its use(s), splitting the let into multiple equivalent lets when more than one child contains a use. If the latter is too annoying in practice, we can try some alternate approaches, such as @georgefst's suggestion, or just a macro step that pushes a let all the way down to all of its bound variable's occurrences in one go.

(One special case that occurs to me as I write this: imagine we have const x y = x and we evaluate const 3 2, giving let x = 3, y = 2 in x — we need to be able to recognize and step-eliminate the let y = 2 in this case.)
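For concreteness, a toy sketch of the proposed "push down" step over a stand-in expression type (this is not Primer's real AST, and it assumes bound names are unique so capture can be ignored):

    data Expr
      = Var String
      | App Expr Expr
      | Let String Expr Expr
      deriving Show

    occurs :: String -> Expr -> Bool
    occurs x (Var y)     = x == y
    occurs x (App f a)   = occurs x f || occurs x a
    occurs x (Let y e b) = occurs x e || (x /= y && occurs x b)

    -- One step of pushing 'let x = e in App f a' towards the uses of x:
    -- duplicate the let into exactly those children that mention x.
    -- If neither child mentions x, the (unused) binding disappears.
    pushLet :: String -> Expr -> Expr -> Expr
    pushLet x e (App f a) = App (wrap f) (wrap a)
      where wrap t = if occurs x t then Let x e t else t
    pushLet x e body
      | occurs x body = Let x e body -- already adjacent; eliminate on a later step
      | otherwise     = body

The real rule would also need to handle binders and the multi-binding special case mentioned just above, but hopefully this conveys the shape of the step.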

See #28 for an example of why this feature would be useful.

Should we allow infinite reduction of terms bound in letrecs?

The current evaluator allows you to do this:

    letrec f = λx. f x in f Zero
==> letrec f = λx. (λx. f x) x in f Zero
==> letrec f = λx. f x in f Zero
==> letrec f = λx. (λx. f x) x in f Zero
==> letrec f = λx. f x in f Zero
...

This could be an educational example of unbounded recursion, or it could be annoying.

Post-hoc smart applications

(Originally posted here: https://github.com/hackworthltd/vonnegut/pull/815#issuecomment-904778746)

With hackworthltd/vonnegut#815, we can go from this:

[screenshot]

to this, in just one step, by using the saturated application action with map:

[screenshot]

It would be useful if, when inserting a value into a hole, we check whether it's part of an application spine and, if so, apply this action again to see if we can refine the type further. For example, assume I have some f : Nat -> Bool and I insert it into the HoF hole of map; then it would be great if we could automatically get this in just one step ("Use a variable" action with f):

[screenshot]

Add support for integers

In one of the expert user testing sessions, the user suggested it would be nice to have integers and arithmetic operators (+, -, ×, /) for writing and evaluating programs.

Do we want to support those? I have spoken to Drew, and he agreed that it would be nice and presumably feasible to support integers. How about arithmetic operators?

Scaffolding: build lambda expressions automatically from type

I've been pushing for a while for some editor smarts that would automatically build a function's expression lambdas as the student creates the function's type. For example, as the student fills out the type A -> B -> C, the editor could be filling in the expression hole with λa -> λb -> ?.
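A toy sketch of the scaffolding idea, using stand-in types rather than Primer's real AST:

    data Ty   = TCon String | TFun Ty Ty
    data Expr = Hole | Lam String Expr
      deriving Show

    -- Build the lambda spine suggested by a function type, leaving a hole
    -- for the body: A -> B -> C becomes λa. λb. ?.
    scaffold :: Ty -> Expr
    scaffold = go (map (:[]) ['a' ..])
      where
        go (v : vs) (TFun _ result) = Lam v (go vs result)
        go _        _               = Hole

    -- scaffold (TFun (TCon "A") (TFun (TCon "B") (TCon "C")))
    --   gives Lam "a" (Lam "b" Hole), i.e. λa. λb. ?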

I still think this is a good idea. However, it's something we should only enable at levels well above Beginner. The reason is that I think it's quite important for students to build their function expressions by hand for quite a while before we automate this for them. It should arguably not be enabled until the student begins to feel a bit frustrated/bored by the robotic nature of building the expression lambda spines. (When this point occurs could be left to the discretion of the instructor.)

Rare property test failures in EvalFull

I once saw this error on 15dbbd3112bf8b0d954a152d5807521d22bb2fe4.
I'll copy it here and self-assign so it does not get lost (it seems to be a very rare occurrence with our property tests).
Tasks:

  • Fix the bug
  • Improve / add new generators so it gets hit more often
  • add unit test

The full text is at https://gist.github.com/brprice/13058384f13291b6ce31bbfe3e8d17cd
A brutally truncated version is below

test/Test.hs
  Tests
    EvalFull
      resume:                                      FAIL (15.46s)
          ✗ resume failed at test/Tests/EvalFull.hs:371:5
            after 96 tests, 20 shrinks and 771 discards.
          
                ┏━━ test/Tests/EvalFull.hs ━━━
            352 ┃ hprop_resume :: Property
            353 ┃ hprop_resume = withDiscards 1000 $
            354 ┃   propertyWT (buildTypingContext defaultTypeDefs mempty NoSmartHoles) $ do
            ...
            357 ┃     n <- forAllT $ Gen.integral $ Range.linear 2 1000 -- Arbitrary limit here
                ┃     │ 6
            ...
            361 ┃     m <- forAllT $ Gen.integral $ Range.constant 1 (stepsFinal - 1)
                ┃     │ 1
            ...
            371 ┃     set _ids' 0 sFinal === set _ids' 0 sTotal
                ┃     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                ┃     │ ━━━ Failed (- lhs) (+ rhs) ━━━
                ┃     │   Left
                ┃     │     TimedOut
                ...
                ┃     │                   LAM
                ┃     │                     Meta 0 Nothing Nothing
                ┃     │ -                   "a72"
                ┃     │ +                   "a147"
                ┃     │                     Let
                ┃     │                       Meta 0 Nothing Nothing
                ┃     │                       "a2"
                ┃     │ -                     Var (Meta 0 Nothing Nothing) "a72"
                ┃     │ +                     Var (Meta 0 Nothing Nothing) "a147"
                ...

                ┏━━ test/Tests/EvalFull.hs ━━━
            423 ┃ genDirTmGlobs :: PropertyT WT (Dir, Expr, Type' (), M.Map ID Def)
            424 ┃ genDirTmGlobs = do
            425 ┃   dir <- forAllT $ Gen.element [Chk, Syn]
                ┃   │ Chk
            426 ┃   (t', ty) <- case dir of
            427 ┃     Chk -> do
            428 ┃       ty' <- forAllT $ genWTType KType
                ┃       │ TFun
                ┃       │   ()
                ┃       │   (TEmptyHole ())
                ┃       │   (TFun
                ┃       │      ()
                ┃       │      (TEmptyHole ())
                ┃       │      (TApp
                ┃       │         () (TApp () (TEmptyHole ()) (TEmptyHole ())) (TEmptyHole ())))
            429 ┃       t' <- forAllT $ genChk ty'
                ┃       │ Letrec
                ┃       │   ()
                ┃       │   "a"
                ┃       │   (LAM
                ┃       │      ()
                ┃       │      "a2"
                ┃       │      (Let
                ┃       │         ()
                ┃       │         "a4"
                ┃       │         (EmptyHole ())
                ┃       │         (App
                ┃       │            ()
                ┃       │            (EmptyHole ())
                ┃       │            (Letrec
                ┃       │               ()
                ┃       │               "a1"
                ┃       │               (Lam
                ┃       │                  ()
                ┃       │                  "a3"
                ┃       │                  (Let
                ┃       │                     ()
                ┃       │                     "a5"
                ┃       │                     (Letrec () "a6" (EmptyHole ()) (TVar () "a2") (EmptyHole ()))
                ┃       │                     (EmptyHole ())))
                ┃       │               (TEmptyHole ())
                ┃       │               (Hole () (Var () "a"))))))
                ┃       │   (TEmptyHole ())
                ┃       │   (Var () "a")
            430 ┃       pure (t', ty')
            431 ┃     Syn -> forAllT genSyn
            432 ┃   t <- generateIDs t'
            433 ┃   globTypes <- asks globalCxt
            434 ┃   let genDef i (n, defTy) =
            435 ┃         (\ty' e -> Def {defID = i, defName = n, defType = ty', defExpr = e})
            436 ┃           <$> generateTypeIDs defTy <*> (generateIDs =<< genChk defTy)
            437 ┃   globs <- forAllT $ M.traverseWithKey genDef globTypes
                ┃   │ fromList []
            438 ┃   pure (dir, t, ty, globs)

            This failure can be reproduced by running:
            > recheck (Size 66) (Seed 3667079855370498646 17075026488463551245) resume
          
        Use '--hedgehog-replay "Size 66 Seed 3667079855370498646 17075026488463551245"' to reproduce.

In production, ensure we never use a local SQLite database

As of hackworthltd/vonnegut#587, the vonnegut-service will attempt to look up and parse the DATABASE_URL environment variable if no database command-line flag is provided, and failing that, will fall back on creating a local SQLite database. We do this for the reasons described in https://github.com/hackworthltd/vonnegut/pull/587#issue-673784467.

In a proper production service, we should never create a local SQLite database, since those are ephemeral in a container setting. Therefore, it'd be preferable if the service failed in this situation. However, this will require some additional CI/testing work, so we haven't implemented this behavior yet. This issue exists to track this TODO.
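A minimal sketch of the intended production behaviour (requireDatabaseUrl is hypothetical, not the actual vonnegut-service code):

    import System.Environment (lookupEnv)
    import System.Exit (die)

    -- In production, refuse to start rather than silently falling back to
    -- an ephemeral local SQLite database.
    requireDatabaseUrl :: IO String
    requireDatabaseUrl = do
      murl <- lookupEnv "DATABASE_URL"
      case murl of
        Just url -> pure url
        Nothing  -> die "DATABASE_URL is not set; refusing to fall back to SQLite"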

Proposal: Introduce eval steps in edit mode

We have some concerns about the understandability of Eval mode. Much of this stems from the fact that we introduce too many concepts at once. In particular, we'd like to introduce the evaluation steps one at a time.

I've been wondering whether we could start by introducing them in edit mode. We could have, for example, a "refactoring actions" menu, clearly labelled with "these actions do not change the meaning of your program", and offering (we may also want inverses of these, but that's less relevant here):

  • Perform this pattern match
  • Inline the value of this variable
    • This could even be useful to explain what a variable is (in the absence of lambdas, at least). We can allow the student to inline the definition, and explain that the new program is equivalent to the old one.
  • Remove this unused binding
    • It'd be nice to also highlight the places where a binding is unused.
  • Beta-reduce this application
  • ... and the others, but only in expert mode (BETAReduction, LocalTypeVarInline, PushAppIntoLetrec)

Then, only once each of these is understood, we can introduce eval mode, which builds on these actions.
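As a rough illustration of the kind of action in the list above, here is a toy sketch of "Beta-reduce this application" using the application-as-let rule from our evaluation semantics (Expr is a stand-in, not the real AST):

    data Expr
      = Var String
      | Lam String Expr
      | App Expr Expr
      | Let String Expr Expr
      deriving Show

    -- A redex 'App (Lam x body) arg' becomes 'Let x arg body'; anything
    -- else is not reducible by this particular action.
    betaReduce :: Expr -> Maybe Expr
    betaReduce (App (Lam x body) arg) = Just (Let x arg body)
    betaReduce _                      = Nothing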

I'd been thinking for a while that having the eval steps available in edit mode could be useful for transforming programs. But the possible pedagogical value, as a step towards introducing eval mode, has only just occurred to me.

This could also potentially provide an opportunity to somewhat unify the UIs (and implementation) of Eval and Edit mode. There's also some similarity with @dhess's ideas about wanting a version of Eval where the student "chooses" which action to apply (I couldn't find an existing issue tracking this; EDIT: hackworthltd/vonnegut#639).

* We might want to find less scary words than "refactor", "inline", "equivalent", etc.

Clean up `ActionError` type

There are currently a few TODOs in the ActionError comments, all to do (hah) with making better error types. We should clean this up as part of our backend work over the next few months.

Privilege separation for database operations

Before serving the persistent database, the Vonnegut web service initializes it by creating the required tables. (If the DB has already been created, this is a no-op.) The database privileges required to do this (and, later, migrations) are arguably different from the permissions needed to update the database, so before we go into proper production, we should probably run these different operations under different DB users.

(This is obviously only applicable to PostgreSQL databases, and not SQLite databases.)

Highlight & promote type-matching constructors in case expressions

In this program:

[Screenshot, 2021-07-04 2:57 PM]

if I try to insert True or False here, they should be highlighted, because their types match the expected type, but they're not:

[Screenshot, 2021-07-04 2:58 PM]

Type-matching constructors should also be moved to the top of the list of in-scope values. From the above example, it might appear that they are, as True and False are at the top of the list, but here we can see that if we change the expected type of the hole to Nat, Zero and Succ are not at the top of the list:

[Screenshot, 2021-07-04 2:59 PM]

Parametric constructors

I can see an argument that constructors which take a parameter (e.g., Succ in the second example shown above) should not be promoted, because their signatures don't match the expected type. Technically, that's correct, but I think in this case, it makes more sense to consider the type they create when looking for matches, rather than their signatures. (We could make a similar argument for functions.)
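A toy sketch of "consider the type they create": strip a constructor's argument arrows and compare only the result type against the hole's expected type (Ty is a stand-in for the real type representation):

    data Ty = TCon String | TApp Ty Ty | TFun Ty Ty
      deriving (Eq, Show)

    -- The type a constructor creates is whatever remains after its
    -- argument arrows, e.g. Succ : Nat -> Nat creates a Nat.
    resultType :: Ty -> Ty
    resultType (TFun _ codomain) = resultType codomain
    resultType t                 = t

    -- Promote a constructor if the type it creates matches the hole's
    -- expected type, even when its full signature does not.
    matchesHole :: Ty -> Ty -> Bool
    matchesHole expected conSig = resultType conSig == expected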

Backend should be i18n-ready

Our backend should be i18n-ready, so that we can hire translators to provide internationalized strings for error messages and other things we might show to the student.

Add `cabal-edit lint` to linters?

In a similar spirit to #29, we may want to look at adding https://github.com/sdiehl/cabal-edit#lint to check for "common problems with your version bounds and recommend package upgrades when available."

Though it may be awkward to integrate with Nix, as it "uses a cache of Hackage package versions internally" which probably needs to be rebuilt (currently it suggests upgrading to the latest version of optics: 0.3, but we are using 0.4...), and this cabal-edit rebuild fails to find a Hackage tarball for me.

Install documentation for Haskell dependencies

We enabled doHaddock in 8ae175805d1e43ec88d89ba887b8795cc6f7d2fc, which gives us docs on hover in HLS, and therefore must be passing -haddock to GHC. But it's not clear from the documentation whether it's also supposed to replicate Cabal's --enable-documentation flag, which is required for full Haddock browsing functionality in HLS.

Relatedly, cabal haddock vonnegut (or make docs) produces HTML which doesn't contain any links to definitions from external packages, even base. In cabal-based projects, without Nix, I've found that links to libraries which ship with GHC (base, containers, stm etc.) are always present, and --enable-documentation is required for third-party libs.
