hackworthltd / primer

A pedagogical functional programming language.

License: GNU Affero General Public License v3.0
This is to track some thinking I'd like to do at some point
I've just had a thought, and this seems like as good a place as any to note it down...
When can holes appear - are they always at change-of-direction, and if so should we merge them?
Originally posted by @brprice in https://github.com/hackworthltd/vonnegut/issues/543#issuecomment-860608436
We should consider adding a sidebar action and a canvas '+' button for lambdas in grouped mode. Probably some other things for consistency - foralls, applications etc?
This should be fairly straightforward once we figure out what constructs need it.
Should we milestone this for the demo?
Once we've finished with the follow-ups to hackworthltd/vonnegut#765 which convert our unit tests to use tasty-hunit
(#801, hackworthltd/vonnegut#802 etc.), we could pass e.g. --hedgehog-tests 100000
to the test executable to run more iterations.
This wouldn't be a good idea before then, since the CLI flag overrides the withTests 1 call in the source code, so we'd end up re-running our unit tests as well, which is obviously a waste of resources.
This might not be something worth running on every PR; rather, it could be, say, a weekly background check.
It's likely to be particularly useful after hackworthltd/vonnegut#779, since the generators there seem more likely to have rare corner cases.
When doing hackworthltd/vonnegut#800, I found there is a problem with doing this in EvalFull: our term generators are rather discard-happy for the APP case. This is OK when generating just one term, but when evaluating in a context with a bunch of globals, we get a multiplicative knockdown in productivity. I have an idea for how to improve it, but am spinning that off into another PR.
The idea is to improve how we generate t @T ∈ S (for a given S).

Currently we generate t ∈ ∀a._, and hope that there is some T that makes the types work out. This is fairly rare!

Instead, we could generate versions of S with some subtrees replaced with a (picking any single subtree will be fine, but there should also be some chance of replacing zero subtrees, the whole tree, or multiple identical subtrees), and generate t ∈ ∀a.S'. This will (I think) go much better, because we can always just generate t = ? : ∀a.S.
[I tried changing the APP case to retry 100 times and then fall back to generating an empty hole, but this led to extremely slow generators: I assume because there is a decent chance of generating, say, 3 nested APPs. Previously this would retry 100 times and then give up, but now the retries nest, so it could try up to 100^3 times before falling back to a hole!]
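The subtree-replacement step could look something like this toy sketch. The Ty type and the abstractions function here are hypothetical illustrations, not Primer's actual AST or generator API:

```haskell
-- Toy type syntax; Primer's real AST differs.
data Ty = THole | TVar String | TCon String | TFun Ty Ty
  deriving (Eq, Show)

-- Candidate bodies S' for generating t ∈ ∀a.S': keep S unchanged,
-- replace the whole tree with "a", or replace any single subtree with "a".
-- (Replacing several identical subtrees at once is a further refinement.)
abstractions :: Ty -> [Ty]
abstractions s = s : go s
  where
    a = TVar "a"
    go t =
      a : case t of
        TFun l r -> [TFun l' r | l' <- go l] ++ [TFun l r' | r' <- go r]
        _ -> []
```

A generator would then pick one S' from this list (perhaps weighted towards single-subtree replacements) and generate t ∈ ∀a.S'; since S itself is always in the list, t = ? : ∀a.S remains available as a fallback.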
Imagine we have this scenario:
If we had a slight twist on the action implemented by hackworthltd/vonnegut#815, described as, say, "Apply a function to this argument," then we could select f in the above expression for testList, apply map to it, and arrive at this in one step:
This would be very useful!
In the case of multi-ary functions like map, we could just apply the selected argument at the first argument position, or, even better, at the first argument position where it fits the type signature of the function you're applying to it.
Relevant comments from the hackworthltd/vonnegut#815 discussion:
https://github.com/hackworthltd/vonnegut/pull/815#issuecomment-905540391
https://github.com/hackworthltd/vonnegut/pull/815#issuecomment-905573472
(This is a tracking issue.)
First we need to decide on Primer 1.0's semantics and write down some operational semantics, in order to provide a written specification. Then we need to update the backend so that it's a valid implementation of that specification, or as close as we can feasibly get.
This is a placeholder for now. I should come back and fill this in.
There was some discussion on hackworth.dev on 1 July, and there is some discussion in the commit "some notes on being careful about syn/chk" (1d2c8d1f791aa7f19a8e8d20876db7f53a01deaa). (I will copy this into a comment below, so it does not get lost during a rebase.)
This is a feature request.
<Include a brief description here.>
<Link to other GitHub Issues or PRs that must be completed before work can begin on this feature.>
<Describe what the feature will do, from a high-level perspective.>
<If there are any particularly tricky implementation bits that are worth discussing or you haven't quite figured out yet, describe the details here.>
<Describe any features that your implementation is explicitly avoiding, that a reasonable person might think should be in spec. For example, if you're adding a new action that operates on variables, but it only works with term variables and not type variables, you might want to mention it here so that the scope of the feature is clear.>
<If there's a GitHub Discussions topic where this feature has been/is being discussed, link it here.>
<Describe ideas or additional features that might be useful once this feature has been implemented. This is a good place to link to other GitHub Issues or PRs that track this future work.>
When trying to do an insert action with multiple children (e.g. application) on not-a-hole, there are multiple things one could mean (one per child of the inserted node) e.g.:
t ~> t $ ?
t ~> ? $ t
[This happens with more than just application, but I'll restrict to that case for terminological reasons]
It would be good to make it clear which will happen, and maybe even give an option.
This could be done with two actions: "apply this to a new argument" and "apply a new function to this". This seems a bit rubbish: why have two separate buttons for things which are so clearly related?

Alternatively, we could have a single "Insert an application" action, whose button shows a pictogram of the new application node:

```
    .
   / \
  o   o
```

Hovering over one child would convert the pictogram into the form above, with a * where you hover (where the current tree will go) and a ? where the new hole will go:

```
   / \               / \
  *   ?             ?   *
keep this as fn   keep this as arg
```

We have a core constructor with no typechecking rules: LetType!
Unfortunately, since we have a catch-all case in synth, we don't get the compiler yelling at us. Even worse, this catch-all is supposed to catch checkable-only things, and it adds an annotation and tries again. This means that synthesising the type of a LetType will always loop (with SmartHoles on)!
Thankfully, this hasn't bitten us yet since they only show up inside evaluation (the user cannot construct them manually), and we do no typechecking of the output of the evaluator.
There is an easy proximal fix: add TC rules (to both synth and check: follow Let/LetRec). This will also need a choice on exactly where the bound variable is in scope.
The lets-avoid-similar-incidents-in-the-future fix is less obvious to me: I'd really like OCaml-style multi-matches, so we can change that catch-all into catching Lam/LAM/Case/...(?) without it being really awkward. Does anyone have a good idiom here? I guess we could factor out the code into a ... where default = ... and just do
synth e@Lam{} = default e
synth e@LAM{} = default e
synth e@Case{} = default e
...
but that is not very satisfying.
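One candidate idiom, sketched below with a toy AST (hypothetical names, not Primer's actual types): list the checkable-only constructors in a single predicate and route them through one guarded equation. GHC's coverage checker can't see through the guard, so this doesn't fully restore exhaustiveness checking, but it does keep the checkable-only set in one obvious place, where a new constructor forces an explicit decision.

```haskell
-- Toy AST; all names here are hypothetical.
data Expr = Var String | Ann Expr String | Lam String Expr | Case Expr

-- The full list of checkable-only constructors, in one place.
checkOnly :: Expr -> Bool
checkOnly Lam{}  = True
checkOnly Case{} = True
checkOnly _      = False

synth :: Expr -> Maybe String
synth e | checkOnly e = synth (Ann e "?") -- annotate and retry, as the catch-all does now
synth (Var _)   = Just "?"
synth (Ann _ t) = Just t
synth _         = Nothing -- still needed to placate the coverage checker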
Details: https://cs-syd.eu/posts/2021-09-11-json-vulnerability?source=reddit
Tracked upstream here: haskell/aeson#864
Reddit thread: https://www.reddit.com/r/haskell/comments/pm7rcr/cs_syd_json_vulnerability_in_haskells_aeson/
The vulnerability is via HashMap from https://hackage.haskell.org/package/unordered-containers, which we do not use (directly). We should consider banning the use of unordered-containers and hashable (https://hackage.haskell.org/package/hashable) until this issue is addressed, if it ever is.

(It's not clear to me whether the maintainers mentioned in the disclosure post, who have apparently known about this issue for about a year, are the aeson maintainers, the unordered-containers maintainers, or the maintainers of hashable, upon which unordered-containers depends.)
Blocked on:
Will fill in details later.
Once hackworthltd/vonnegut#738 is merged, I think there's a case to be made that the new f $ ? saturated-function action should replace the bare $ action for empty holes. The only two cases where I can think of bare $ being what you want are when you haven't yet written the function that you want to use in the hole, or when your function is relatively simple (e.g., no foralls) and you want to partially apply it, and raising it would be more hassle than just doing the applications yourself. However, I hope that this latter case will eventually be subsumed by better inference, where the action can tell how many applications to use based on the type of the hole, and/or by the "phantom application" approach described in #601.

At expert level, I suppose there's no downside to presenting both actions. (In the long term, we could measure usage and eliminate bare $ if f $ ? ends up being used much more often.)

However, at beginner level, I prefer having fewer choices in order to reduce the cognitive load, so long as the choices presented are sufficient for a beginner to write any plausible beginner program.

We should still offer $ for non-holes, of course. For example, if the cursor is on some function variable f, you may want to $-apply it.
We should consider (especially in light of our plans to make the new frontend completely dumb) to have a mode that introduces extra lag when responding to requests. This should emulate the experience of interacting with vonnegut over the internet rather than locally on a dev/testing machine and give us a better idea whether we can really offload even interactive things to the backend.
This may well be possible by running behind some sort of proxy, and such tools are fairly common. Drew mentioned that it is an option in Apple's workflow for developing iPhone/iPad apps; Firefox has an option to throttle the internet connection in its dev tools; and I know the Linux software-RAID tool dmsetup has a similar option for simulating failing HDDs. Hopefully there is at least a decent body of knowledge we can draw on, if not an off-the-shelf solution.
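If the service is a WAI application (as a Servant/Warp service would be), one low-tech option needs no external proxy at all: a middleware that sleeps before handling each request. A minimal sketch, assuming a wai dependency (delayMiddleware is a hypothetical name, not an existing combinator):

```haskell
import Control.Concurrent (threadDelay)
import Network.Wai (Middleware)

-- Delay every request by the given number of milliseconds before
-- passing it on to the wrapped application.
delayMiddleware :: Int -> Middleware
delayMiddleware ms app req respond = do
  threadDelay (ms * 1000) -- threadDelay takes microseconds
  app req respond
```

Wiring it up behind a command-line flag (e.g. run port (delayMiddleware 200 app)) would let us toggle a "simulated internet" mode in dev builds.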
Originally posted by dhess June 30, 2021
Generally speaking, I really like the reduction rule in our evaluation semantics that converts applications to let bindings, as this rule obviates the need for environments (which are not first class in any programming language I'm aware of). However, in Keybase chat today, I pointed out the following limitation of this rule. Consider the following evaluation step in the application of map to a list:

Now let's evaluate the APP redex such that we get a lettype β = Nat in ... in its place:

The problem here is that I have to reduce the lettype β = Nat in ... for all occurrences of β in the subtree under the lettype before I can perform the function application (λf. ...) (λy.Zero). So if I want to mimic the call-by-name evaluation strategy, I can't. I can effectively only do call-by-value.

(Here I use our slightly odd lettype form, but the same goes for lets for term bindings.)

I proposed that, instead of the current rule, which replaces the application with a single lettype at the application location in the tree, we push the lettype β down to each occurrence of β in the subtree, which would allow the student to explore different evaluation strategies.

@georgefst pointed out that in Harry's original "Evaluation steps in Vonnegut" document in Craft (the Apple URI scheme link to the relevant section of that document is craftdocs://open?blockId=CECD7E89-232B-4F44-9A6F-53DF188B9212&spaceId=8b43f204-aeeb-b8ce-45cc-73a653745299), under the "Making substitution incremental" heading, the major rationale for this reduction rule was to not change too much in the tree at once, making application easier to follow; or, as @georgefst put it, "it keeps the evaluation step small and local." By pushing the let or lettype down to each occurrence of the bound variable, we would be harming this nice pedagogical affordance.

@brprice then had the really interesting thought that by providing another reduction rule that "floats lets downwards" towards their occurrences, and changing the rule that allows lets to be eliminated once there are no more uses in their subtree to something more like "a let can be eliminated once it's adjacent to its bound variable's use" (or similar), the student could explore these ideas themselves, all while following our small-step principles of evaluation.

Obviously, after some deliberate practice and eventual understanding of this small-step let elimination, we'd probably want to change the rules to something more like what we have now, to keep this from being too tedious. (@georgefst raised the possibility of allowing the student to drag the let down the tree to its occurrence, to match it up and eliminate it in one gesture.) We could even eventually remove the "function-application-as-let-binding" rule and simply do substitution in one go.
We should add (something like) hpc (and the equivalent for our frontend) to CI to monitor our test coverage.
E.g. https://github.com/hackworthltd/vonnegut/pull/779#discussion_r692359627, https://github.com/hackworthltd/vonnegut/pull/779#discussion_r692315167
We are testing either events that "should be rare", or events that the generator finds hard to sample from, and so we have to crank up the allowed number of discards. We should perhaps improve the generator, or make a more targeted one? Since CI times are only impacted by a few seconds, we have committed the tests for now.
Right now, hackworthltd/vonnegut#765 is failing because of some HLint warnings about unused extension pragmas. I think these warnings can be useful for some extensions that you really don't want to enable globally (CPP, TemplateHaskell, maybe UndecidableInstances...), so I wouldn't want to disable them, but for others it's just busywork.

It's particularly annoying as they're not reported by HLS for some reason (haskell/haskell-language-server#2042). Otherwise I would have caught the issue before committing.

Anyway, shall we just expand the default-extensions list in our cabal files? Currently we have the following enabled via pragmas, and I don't see any harm in enabling them everywhere:

One annoying thing is that we currently have five identical default-extensions lists, for our various components. We could use common stanzas in the cabal files to bring this down to three, but that's still less than ideal. (Roll on GHC2021...)
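For reference, the common-stanza approach would look roughly like this. The extension names below are placeholders; the actual list is whatever we currently enable via pragmas:

```cabal
-- Shared stanza; substitute our real extension list for these placeholders.
common extensions
  default-extensions:
    DeriveGeneric
    LambdaCase
    OverloadedStrings

library
  import: extensions
  ...

test-suite vonnegut-test
  import: extensions
  ...
```

Each component then pulls in the shared list with a single import: line, so the list is maintained in one place per cabal file.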
Whatever we do for hackworthltd/vonnegut#308, we should consider mirroring it for function application. The problem isn't as dire for application, as we already have a working implementation (modulo cursor movement) where the user can highlight the root App node and press the $ action button as many times as they need arguments, but the UX isn't particularly clear. By adding a + button inline and an explicit "Add an argument" action button, as is currently planned in hackworthltd/vonnegut#308, we could probably improve things.
We don't (even after hackworthltd/vonnegut#175) remove holes such as

{? Succ ?} Zero

because we don't have enough information to see if the hole is "finishable" at the point we handle it. In general, if a hole is in a synthesis position, we don't know where else downstream it is being referred to, so we dare not change its type.
I don't know how to do better here!
I also don't know how common this situation is in practice, so have no idea if this should be high priority or not.
I think there are some small changes to the cursor location that will improve the user experience a bit:

- When constructing a function type (→), leave the cursor on the LHS of the arrow. This makes it quick to flesh out the arrow type further.
- When applying, i.e. going from f to f ?, put the cursor on the new hole, since the next thing you want to do is put something in the ?.
We have a unit test for evaluation which checks that letrec x:Bool=x in x takes 100 steps (in a synthesisable context) and times out, giving (letrec x:Bool=x in (x:Bool)):Bool. This has two annotations one may wish to remove. Can/should we do better here? (NB: it is not entirely obvious that we are not doing better already, as this is an arbitrary timeout, not a normal form, so potentially we remove one on the next step. However, this does not happen.)
The reduction sequence is as follows, writing [_] for the elided embeddings of synthesisable terms into checkable terms. (This sequence would be a bit shorter if we reduced letrec x:T=t in C(x) to letrec x:T=t in C((letrec x:T=t in t):T), rather than letrec x:T=t in C(letrec x:T=t in (t:T)). Maybe this would be worth doing regardless of any other decision.)
letrec x:Bool=[x] in x
(inline the letrec, adding an annotation to ensure it is synthesisable)
letrec x:Bool=[x] in (letrec x:Bool=[x] in ([x] : Bool))
letrec x:Bool=[x] in ([x] : Bool)
[letrec x:Bool=[x] in ([x] : Bool)] : Bool
[[letrec x:Bool=[x] in ([x]:Bool)] : Bool] : Bool
[letrec x:Bool=[x] in ([x]:Bool)] : Bool
There are two annotations that maybe we should remove somehow.
- A rule [e]:T ~> e would remove the outer annotation, but I don't know whether this is a sane rule; I'm worried about confluence in particular. I have seen McBride talk about the upsilon rule we have many times, but very rarely about this one, indicating that it is probably problematic. This is the only way that I can see the outer annotation disappearing.
- The inner annotation comes from inlining the letrec. We could either "push the letrec inside the annotation" (hackworthltd/vonnegut#771), or change our reduction rule for inlining letrec to put the letrec inside the annotation, as suggested above. Both of these are pretty innocuous and may be a nice, easy improvement.

Consider this expression:
I want to insert a let above the match x with node such that the expression reads:

λp. let x = ? in match p with ...

but we don't support this at the moment. I think @brprice has mentioned wanting something similar previously, and possibly wanted to support putting the target expression on the left-hand side of the let, as well?
In React, we'll probably want to use the browser's pushState API so that the back and forward buttons work as expected. According to the React docs, to make this work with Servant, it sounds like we'll need to configure the backend service to respond with /index.html for any route it doesn't otherwise understand.
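A sketch of that Servant configuration (route and file names here are hypothetical): put a Raw catch-all last in the API type, and have it serve index.html for any unmatched path, so the SPA's pushState routes all load the app shell.

```haskell
{-# LANGUAGE DataKinds, OverloadedStrings, TypeOperators #-}
import Network.HTTP.Types (status200)
import Network.Wai (responseFile)
import Servant

-- The real backend routes would go on the left; Raw, matched last,
-- catches everything else.
type AppAPI = "api" :> Get '[JSON] Int -- placeholder for the real routes
         :<|> Raw

spaFallback :: Server Raw
spaFallback = Tagged $ \_req respond ->
  respond $
    responseFile status200 [("Content-Type", "text/html")] "index.html" Nothing
```

Because Servant tries alternatives in order, real API routes still win; only requests that match nothing else fall through to the SPA shell.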
One thing I am paranoid about is messing up the renaming of tests and failing to auto-detect them. One way to calculate the number of tests is cabal run vonnegut-test -- -l | wc -l.

I wonder if we can add something to CI that will warn us if the number of tests goes down, which probably indicates that auto-detection has been confused? (Maybe just a bot that comments on the change in the number of tests, and later on the change in test coverage.)
Originally posted by @brprice in https://github.com/hackworthltd/vonnegut/issues/801#issuecomment-902042693
We should ensure that (both) evaluators do sane things to the metadata, if we want to allow the user to interact with a reduced tree (i.e. show the type of nodes, which relies on metadata).

One option is to run a full TC pass after each step, but this is a rather sledgehammer approach.
It's time we start writing proper Haddocks for all new Haskell code. I propose that we have a flag day sometime soon and start requiring them for all new code that warrants them.
Also, I propose that we adopt a policy of writing proper Haddocks whenever we make a change to an existing top-level definition, so that over time, we can retroactively add them to existing code.
Any objections?
We have it on the term level thanks to hackworthltd/vonnegut#738 and hackworthltd/vonnegut#743, but not at the type level.
(e.g. List ?)

Note that we can't (easily) do the equivalent of hackworthltd/vonnegut#712, as we only do synthesis for types, though we could with a bit more work.
This is misleading, and I have to jump through hoops to get the local context. We should be able to make a better abstraction than this! (I'm sure this has been mentioned before somewhere)
Originally posted by @brprice in https://github.com/hackworthltd/vonnegut/pull/697#discussion_r664866758
(See the original comment for context and pointing out where in the code this bites)
Our evaluator should not eliminate lets until they're "pushed down" to their use sites. This would permit more flexible interactive evaluation strategies in eval mode (see discussion in https://github.com/hackworthltd/vonnegut/discussions/638). It might also potentially simplify the most complicated bit of the current evaluator; see https://github.com/hackworthltd/vonnegut/pull/768#discussion_r678488543.

In order to preserve the "small local changes" spirit of our reduction rules in the current eval mode, we should probably implement this as @brprice suggested (and I documented in https://github.com/hackworthltd/vonnegut/discussions/638), such that lets float down towards their occurrences. @georgefst also suggested that we could add an affordance to allow students to click-drag a let to each of its use sites before eliminating it, which seems like a good idea, though it should probably be part of a suite of similar interactions, rather than just a special-case interaction for eliminating lets.

But let's start simple. I propose that we change the current let elimination rule so that a let can only be eliminated once it's adjacent to the use of the bound variable, and that we add a "push down" rule that moves a let one step closer to its use(s), splitting the let into multiple equivalent lets when more than one child contains a use. If the latter is too annoying in practice, we can try some alternate approaches, such as @georgefst's suggestion, or just a macro step that pushes a let all the way down to all of its bound variable's occurrences in one go.
(One special case that occurs to me as I write this: imagine we have const x y = x and we evaluate const 3 2, giving let x = 3, y = 2 in x; we need to be able to recognize and step-eliminate the let y = 2 in this case.)
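On a toy lambda-calculus AST (hypothetical, and much simpler than Primer's real one), the proposed push-down rule might look like this: a let over an application is split across exactly the children that mention the bound variable.

```haskell
data Expr
  = Var String
  | App Expr Expr
  | Let String Expr Expr
  deriving (Eq, Show)

-- Does x occur free in the expression?
free :: String -> Expr -> Bool
free x (Var y)     = x == y
free x (App f a)   = free x f || free x a
free x (Let y e b) = free x e || (x /= y && free x b)

-- One push-down step over an application node: Nothing if the let is
-- already adjacent to its use (the elimination rule handles that case).
pushLet :: Expr -> Maybe Expr
pushLet (Let x e (App f a)) = Just (App (wrap f) (wrap a))
  where wrap t = if free x t then Let x e t else t
pushLet _ = Nothing
```

For example, pushLet (Let "x" (Var "3") (App (Var "x") (Var "y"))) gives Just (App (Let "x" (Var "3") (Var "x")) (Var "y")): the let moves onto the function child (which uses x) and disappears from the argument child (which doesn't).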
See #28 for an example of why this feature would be useful.
The current evaluator allows you to do this:
letrec f = λx. f x in f Zero
==> letrec f = λx. (λx. f x) x in f Zero
==> letrec f = λx. f x in f Zero
==> letrec f = λx. (λx. f x) x in f Zero
==> letrec f = λx. f x in f Zero
...
This could be an educational example of unbounded recursion, or it could be annoying.
(Originally posted here: https://github.com/hackworthltd/vonnegut/pull/815#issuecomment-904778746)
With hackworthltd/vonnegut#815, we can go from this:

to this, in just one step, by using the saturated application action with map:
It would be useful if, when inserting a value into a hole, we checked whether it's part of an application spine and, if so, applied this action again to see if we can refine the type further. For example, assume I have some f : Nat -> Bool and I insert it into the HoF hole of map; then it would be great if we could automatically get this in just one step (the "Use a variable" action with f):
In one of the expert user testing sessions, the user suggested it would be nice to have integers and arithmetic operators (+, -, ×, /) to write and evaluate programs.
Do we want to support those? I have spoken to Drew, and he agreed that it would be nice and presumably feasible to support integers. How about arithmetic operators?
I've been pushing for a while for some editor smarts that would automatically build a function's expression lambdas as the student creates the function's type. For example, as the student fills out the type A -> B -> C, the editor could fill in the expression hole with λa -> λb -> ?.

I still think this is a good idea. However, it's something we should only enable at levels well above Beginner. The reason is that I think it's quite important for students to build their function expressions by hand for quite a while before we automate this for them. It should arguably not be enabled until the student begins to feel a bit frustrated/bored by the robotic nature of building the expression lambda spines. (When this point occurs could be left to the discretion of the instructor.)
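On a toy AST (hypothetical types, not the real editor code), the automation itself is tiny: peel arrows off the type, emitting one lambda per argument, with a hole as the final body.

```haskell
data Ty = TCon String | TFun Ty Ty

data Expr = Hole | Lam String Expr
  deriving (Show)

-- Build λa. λb. ... ? from A -> B -> ... -> C, drawing binder
-- names from the supplied name supply.
spineFor :: [String] -> Ty -> Expr
spineFor (n : ns) (TFun _ rest) = Lam n (spineFor ns rest)
spineFor _ _ = Hole
```

For example, spineFor ["a", "b"] (TFun (TCon "A") (TFun (TCon "B") (TCon "C"))) gives Lam "a" (Lam "b" Hole), i.e. λa -> λb -> ?. The interesting design work is all in the UX (when to trigger it, and how to keep the spine in sync as the type is edited), not in this traversal.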
The initial version should explain or mention:
Atom has a good one, for reference: https://github.com/atom/atom/blob/master/CONTRIBUTING.md
Depends on #143.
I once saw this error on 15dbbd3112bf8b0d954a152d5807521d22bb2fe4.
I'll copy it here and self-assign so it does not get lost (it seems to be a very rare occurrence with our property tests).
Tasks:
The full text is at https://gist.github.com/brprice/13058384f13291b6ce31bbfe3e8d17cd
A brutally truncated version is below
test/Test.hs
Tests
EvalFull
resume: FAIL (15.46s)
✗ resume failed at test/Tests/EvalFull.hs:371:5
after 96 tests, 20 shrinks and 771 discards.
┏━━ test/Tests/EvalFull.hs ━━━
352 ┃ hprop_resume :: Property
353 ┃ hprop_resume = withDiscards 1000 $
354 ┃ propertyWT (buildTypingContext defaultTypeDefs mempty NoSmartHoles) $ do
...
357 ┃ n <- forAllT $ Gen.integral $ Range.linear 2 1000 -- Arbitrary limit here
┃ │ 6
...
361 ┃ m <- forAllT $ Gen.integral $ Range.constant 1 (stepsFinal - 1)
┃ │ 1
...
371 ┃ set _ids' 0 sFinal === set _ids' 0 sTotal
┃ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
┃ │ ━━━ Failed (- lhs) (+ rhs) ━━━
┃ │ Left
┃ │ TimedOut
...
┃ │ LAM
┃ │ Meta 0 Nothing Nothing
┃ │ - "a72"
┃ │ + "a147"
┃ │ Let
┃ │ Meta 0 Nothing Nothing
┃ │ "a2"
┃ │ - Var (Meta 0 Nothing Nothing) "a72"
┃ │ + Var (Meta 0 Nothing Nothing) "a147"
...
┏━━ test/Tests/EvalFull.hs ━━━
423 ┃ genDirTmGlobs :: PropertyT WT (Dir, Expr, Type' (), M.Map ID Def)
424 ┃ genDirTmGlobs = do
425 ┃ dir <- forAllT $ Gen.element [Chk, Syn]
┃ │ Chk
426 ┃ (t', ty) <- case dir of
427 ┃ Chk -> do
428 ┃ ty' <- forAllT $ genWTType KType
┃ │ TFun
┃ │ ()
┃ │ (TEmptyHole ())
┃ │ (TFun
┃ │ ()
┃ │ (TEmptyHole ())
┃ │ (TApp
┃ │ () (TApp () (TEmptyHole ()) (TEmptyHole ())) (TEmptyHole ())))
429 ┃ t' <- forAllT $ genChk ty'
┃ │ Letrec
┃ │ ()
┃ │ "a"
┃ │ (LAM
┃ │ ()
┃ │ "a2"
┃ │ (Let
┃ │ ()
┃ │ "a4"
┃ │ (EmptyHole ())
┃ │ (App
┃ │ ()
┃ │ (EmptyHole ())
┃ │ (Letrec
┃ │ ()
┃ │ "a1"
┃ │ (Lam
┃ │ ()
┃ │ "a3"
┃ │ (Let
┃ │ ()
┃ │ "a5"
┃ │ (Letrec () "a6" (EmptyHole ()) (TVar () "a2") (EmptyHole ()))
┃ │ (EmptyHole ())))
┃ │ (TEmptyHole ())
┃ │ (Hole () (Var () "a"))))))
┃ │ (TEmptyHole ())
┃ │ (Var () "a")
430 ┃ pure (t', ty')
431 ┃ Syn -> forAllT genSyn
432 ┃ t <- generateIDs t'
433 ┃ globTypes <- asks globalCxt
434 ┃ let genDef i (n, defTy) =
435 ┃ (\ty' e -> Def {defID = i, defName = n, defType = ty', defExpr = e})
436 ┃ <$> generateTypeIDs defTy <*> (generateIDs =<< genChk defTy)
437 ┃ globs <- forAllT $ M.traverseWithKey genDef globTypes
┃ │ fromList []
438 ┃ pure (dir, t, ty, globs)
This failure can be reproduced by running:
> recheck (Size 66) (Seed 3667079855370498646 17075026488463551245) resume
Use '--hedgehog-replay "Size 66 Seed 3667079855370498646 17075026488463551245"' to reproduce.
As of hackworthltd/vonnegut#587, the vonnegut-service will attempt to look up and parse the DATABASE_URL environment variable if no database command-line flag is provided and, failing that, will fall back on creating a local SQLite database. We do this for reasons described in https://github.com/hackworthltd/vonnegut/pull/587#issue-673784467.
In a proper production service, we should never create a local SQLite database, since those are ephemeral in a container setting. Therefore, it'd be preferable if the service failed in this situation. However, this will require some additional CI/testing work, so we haven't implemented this behavior yet. This issue exists to track this TODO.
Currently we only do literal equality, but we should at least do up-to-alpha equality. I'm not sure what the current state is with respect to holes in types and highlighting variables to insert, but we should look at that as well.
We have some concerns about the understandability of Eval mode. Much of this stems from the fact that we introduce too many concepts at once. In particular, we'd like to introduce the evaluation steps one at a time.
I've been wondering whether we could start by introducing them in edit mode. We could have, for example, a "refactoring actions" menu, clearly labelled with "these actions do not change the meaning of your program", and offering (we may also want inverses of these, but that's less relevant here):
(BETAReduction, LocalTypeVarInline, PushAppIntoLetrec)

Then, only once each of these is understood, we can introduce eval mode, which builds on these actions.
I'd been thinking for a while that having the eval steps available in edit mode could be useful for transforming programs. But the possible pedagogical value, as a step towards introducing eval mode, has only just occurred to me.
This could also potentially provide an opportunity to somewhat unify the UIs (and implementation) of Eval and Edit mode. There's also some similarity with @dhess's ideas about wanting a version of Eval where the student "chooses" which action to apply (I can't find an existing issue tracking this EDIT: hackworthltd/vonnegut#639).
* we might want to find less scary words than "refactor", "inline", "equivalent" etc.
https://github.com/kowainik/stan
I'm not sure how feasible this is to integrate into our Nix setup for linting, but it would be nice to have.
There are currently a few TODOs in the ActionError
comments, all to do (hah) with making better error types. We should clean this up as part of our backend work over the next few months.
Before serving the persistent database, the Vonnegut web service initializes it by creating the required tables. (If the DB has already been created, this is a no-op.) The database privileges required to do this (and, later, migrations) are arguably different from the permissions needed to update the database, so before we go into proper production, we should probably run these different operations under different DB users.
(This is obviously only applicable to PostgreSQL databases, and not SQLite databases.)
In this program:
if I try to insert True or False here, they should be highlighted, because their types match the expected type, but they're not:

Type-matching constructors should also be moved to the top of the list of in-scope values. From the above example, it might appear that they are, as True and False are at the top of the list, but here we can see that if we change the expected type of the hole to Nat, then Zero and Succ are not at the top of the list:

I can see an argument that constructors which take a parameter (e.g., Succ in the second example shown above) should not be promoted, because their signatures don't match the expected type. Technically, that's correct, but I think in this case it makes more sense to consider the type they create when looking for matches, rather than their signatures. (We could make a similar argument for functions.)
Once hackworthltd/vonnegut#738 is merged, we'll need some comprehensive tests for saturatedApplication.

See also hackworthltd/vonnegut#721.
Our backend should be i18n-ready, so that we can hire translators to provide internationalized strings for error messages and other things we might show to the student.
In a similar spirit to #29, we may want to look at adding https://github.com/sdiehl/cabal-edit#lint to check for "common problems with your version bounds and recommend package upgrades when available."
Though it may be awkward to integrate with Nix, as it "uses a cache of Hackage package versions internally" which probably needs to be rebuilt (currently it suggests upgrading to the latest version of optics, 0.3, even though we are using 0.4...), and cabal-edit rebuild fails to find a Hackage tarball for me.
We enabled doHaddock in 8ae175805d1e43ec88d89ba887b8795cc6f7d2fc, which gives us docs on hover in HLS, and therefore must be passing -haddock to GHC. But it's not clear from the documentation whether it's also supposed to replicate Cabal's --enable-documentation flag, which is required for full Haddock browsing functionality in HLS.

Relatedly, cabal haddock vonnegut (or make docs) produces HTML which doesn't contain any links to definitions from external packages, even base. In cabal-based projects without Nix, I've found that links to libraries which ship with GHC (base, containers, stm, etc.) are always present, and --enable-documentation is required for third-party libs.
This is mostly a placeholder for now, pending discussion in #740, so that we can start adding this issue to code comments marked TODO in reference to the smart-holes toggle.