popl2022's People

Contributors: mengwangoxf, rolyp, tpetricek

popl2022's Issues

Draft 2 of sections 1-4

Done/dropped:

  • 3 intro: say what this section/definitions/results will achieve
  • 3.1: why we define Sel_v{A}
  • 3.2: revisit use of "interpret"
  • 3.2: move GC motivation here from 3.5? (but also see 3 intro)
  • 3.3: hole rule for forward-eval needs proper motivation
  • 3.3: significance of monotonicity theorems needs to be established earlier (in 3 intro or 3.2)
  • 3.5: intuitions about forward-eval need to be justified in terms of intuitions presented in section 1
  • 2 intro: should motivate use of eliminators (and simplify/clarify what's already there)
  • 2 intro: mention closures?
  • 3.2: introduce forward/backward match/eval symbols here?
  • 3.3: clarify use of grey subscripts in rules
  • 3.3: widespread use of \leq etc on pairs needs signposting early
  • 4: catchier title (suggestive of calculus of analyses -- maybe something about modalities of dependency)?
  • 4 intro: stronger sell for idea of reusing/combining analyses from section 3
  • 4.1: maybe the De Morgan dual figure could come later (at least, ask reader to ignore Fig. 11b initially)
  • 4.1: state that we derive what is "necessary" from what is "sufficient" up front, and relate to linking?
  • fit into 21 pages

Turing practice talk

Switch to Keynote instead of iPad.

  • Keynote import (with videos)
    • title slide
    • provenance-aware data visualisation
    • Goal 1 (rework as use case – see above)
    • Goal 2 (rework as use case – see above)
    • prior work 1: Galois slicing (drop)
    • prior work 2: Galois connections (rework)
    • prior work 3: Differential slicing (drop)
    • prior work 4: Differential slicing (merge with previous slide and rework)
    • Galois dependencies (rework as “requirements” slide – see above)
    • representing selections
    • forward and backward dependency analysis
    • Demo 1: 1 (simplify, merge into use case 1?)
    • Demo 1: 2 (drop)
    • Goal 2 exposition
    • De Morgan duals 1
    • De Morgan duals 2
    • Demo 2 (merge into goal 2 – use to motivate explorable explanations)
    • conclusion (move to beginning)
    • closing slide
  • summary slide explaining two use cases
    • programmer just writes regular (functional) code
    • link to data for transparency
    • link outputs to other outputs as comprehension aid (view coordination)
    • convolution example – explorable explanations
  • Demo 1 (linked charts)
    • code slide
  • Demo 2 (convolution)
    • code slide
  • Phil W: why linking important? relevance of meet/join preservation?
  • Simon F: say what you need/why prior work doesn’t have it; otherwise kill most of prior work section
  • James McK: pare back/drop the goal slides in favour of the videos?
  • dry run (26 mins)
  • why is this a problem? slide back to the end
  • our own programming language
  • “baked in” in bold?
  • properties we care about -> desiderata
  • rehearse from “use cases summary” to “problem spec”
  • preliminaries
    • set Keynote to present from laptop screen
    • mic on and nearby

5-min promo video

The authors will be asked to prepare the video recording
of a 5-minute talk advertising their paper by January 2, 2022,
two weeks before POPL starts. These 5-minute talks will be
made available to the participants of both in-person and
virtual versions of the meeting in advance.

The precise instructions on this will be sent sometime in
December. I am sending this reminder so that you can make
a plan on this video recording.

Haskell-style arrays

Arrays with an API similar to Haskell arrays (except they’ll be strict, and not polymorphically typed).

  • array types
  • array values
  • array expressions:
    • creation
    • access
    • size
  • trace forms for the above
  • typing rules
  • evaluation rules
  • forward slicing rules
  • backward slicing rules
  • complete missing expression annotations in fwd/bwd rules
  • revisit syntax
  • array update?
  • move structured let from appendix to surface language
  • match..as should only allow a single pattern in each clause (currying not permitted)
  • rename range to enumFromTo, a la Haskell
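
A rough sketch of the intended API (names `array`, `(!)` and `size` modelled on Haskell's `Data.Array`; this is not the actual Fluid implementation, and real strictness is elided):

```haskell
-- A monomorphic array (sketch): bounds plus an index/value table.
data Array = Array { bounds :: (Int, Int), assocs :: [(Int, Int)] }

-- Creation from bounds and an association list.
array :: (Int, Int) -> [(Int, Int)] -> Array
array bs kvs = Array bs kvs

-- Access by index; out-of-bounds indices are an error.
(!) :: Array -> Int -> Int
(Array _ kvs) ! i = maybe (error "index out of bounds") id (lookup i kvs)

-- Size, determined by the bounds.
size :: Array -> Int
size a = let (lo, hi) = bounds a in hi - lo + 1
```

Each of creation, access and size would then get a trace form plus typing, evaluation and fwd/bwd slicing rules, per the checklist above.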

POPL talk

  • keyboard batteries
  • redo movies for better quality
    • linked charts
    • convolution (just resize)
  • bottom of movies hidden by playback controls -> disable controls
  • Keynote: disable click to advance ❌ not possible
  • configure Keynote slide show (what will be on what screen)
  • cursor effect options
  • first pass over presenter notes
    • title slide
    • Provenance-aware data visualisation
    • use case summary slide
    • Use case 1
    • Use case 2
    • Matrix convolution
    • Linked charts
    • Problem spec -- see below
    • Galois connections
    • Representing selections
    • Linking selections
    • Linking selections between two outputs
    • De Morgan duals 1
    • De Morgan duals 2
  • first pass over presenter notes
    • How can we improve on this picture?
    • Thank you!
  • prior work on differential Galois slicing (ignoring “context”) – still not quite right
  • Problem spec
    • ability to focus on substructure of a larger structure
    • round-trip properties for “reproducibility”
    • relation for linking not the same as the “required” relation
    • only really meaningful if we can produce a computation/explanation, but a start
    • meet/join preservation a key intuition (but also the key mathematical property)
    • prior work on Galois slicing – appropriate round-tripping properties but otherwise unsuitable
    • presenter notes
  • dry run 1 (12.5 mins)
  • final tweaks (try to extend)
    • more sentence/paragraph breaks in presenter notes to slow things down
    • demo 1
      • slower (revert to x4)
      • explain join preservation using USA/China
      • sync up presenter notes
    • demo 2
      • say what we're convolving with what
      • explain boundary behaviour
      • explain zero behaviour
      • round-tripping: induced selection should be in a different colour
      • unpack wrapping a bit
      • we talked about join preservation and monotonicity in the last example..
    • matrix convolution
      • implemented as a pure function, no logic about interaction
      • we have our own programming language for doing this
    • linked charts
      • for example in a notebook
      • language runtime takes care of tracking dependencies and linking between input and output
    • problem spec
      • round-tripping property, or one of the round-tripping properties, is going to mean...
      • "sufficiency" -> fine-grained notion of reproducibility
      • Galois slicing lacks the ability to select with a structured value like this
      • additional animation to show naive attempt and explain what goes wrong
      • cite prior work
      • might be tempted to reuse the requires/sufficientFor relation on the left
      • different modality
    • Galois connections
      • upper adjoint/lower adjoint
      • GCs compose component-wise, so useful to think of them as having a direction
      • arbitrarily, pick direction of lower adjoint as direction of GC
      • near inverses: lower adjoint is closest approximation to an inverse from below, ...
    • Representing selections
      • improve explanation of "shape"
      • for example we can use a vector of bits to represent multiple selections simultaneously
    • Linking selections
      • going forward, selections represent availability
      • going backward, selections represent demand
    • De Morgan duals 2
      • ..and the result will in general preserve neither meets nor joins
    • How can we improve on this picture?
      • different title?
    • Linking selections between two outputs
      • so given this kind of analysis, how can we link selections between two outputs, given that...
      • data structure -> annotated value
    • De Morgan duals 1
      • contravariance is going to be key to implementing brushing and linking
  • dry run 2 (19 mins)
  • final, final tweaks
    • prerecorded..prerecorded
    • left -> right
    • from the data viz literature..when there are two views of the same data
    • CONCUR '16
    • re-sync demo 1 presenter notes
    • associated -> demanded
    • preserves joins -> preserves unions
    • better flow into Galois connections slide
    • tweak convolution text
    • tweak problem spec (reusing solution)
    • tweak GC slide (drop name of functions in lower adjoint/upper adjoint)

Before talk:

  • log into Airmeet
  • cursor effect enabled?
  • slides ready to play on big monitor, presenter notes to left
  • mic on and nearby

Environment lookup lemma

Some to dos:

  • In the statement of the lemma on p.13, change the bulleted list to a numbered list for easy reference.
  • In the proof, make it clear which part of the theorem you’re proving (if the theorem has multiple parts).
  • To prove an implication A -> B you can suppose A and then deduce B – make sure that this structure is clear. So you should start with “suppose 𝜌 ◃_Γ (𝑧:𝑣)”.
  • Make it clear what the induction is “on” (the structure that is getting smaller at each step) – in this case the proof that 𝜌 ◃_Γ (𝑧:𝑣). This is also the subject of the case analysis.
  • Add rule names to environment lookup/lookupBwd definitions so that it’s always clear what rules are being used to pattern-match or construct proofs (lookup-head/lookup-tail might work or whatever you think conveys a reasonable intuition).
  • In the second case, the Γ subscript in the premise should be Γ’.
  • Also in the second case, you need to explain where the premise of the ▹ proof comes from. How do you know 𝜌′ ▹ (𝑧 : 𝑣)? That’s where you need the inductive hypothesis, which is a function of type (forall 𝜌 Γ 𝑧 𝑣 . 𝜌 ◃_Γ (𝑧:𝑣) -> 𝜌 ▹ (𝑧:𝑣)). You will be “invoking” that function at a strictly smaller 𝜌, in this case 𝜌′. You need to establish 𝜌′ ▹ (𝑧 : 𝑣) using the IH and then use that fact to construct the (𝜌 · (𝑧 : □)) ▹ (𝑧:𝑣) derivation.
  • I suggest omitting the closing \intertext{\crossrule} and ending on \notag.

Also: it can be useful to restate the theorem (as you’ve done) as the point at which you present the proof. It’s also a pain to have to keep two copies of the theorem in sync. There’s a LaTeX package that makes it easy to restate lemmas; I’ll look into setting this up.
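
As a computational intuition for the lemma (not the formal definitions), the two judgements can be sketched as functions, with the round trip `lookupFwd (lookupBwd dom z v) z == Just v` (for `z` in `dom`) capturing the statement. All names here are mine:

```haskell
-- Sketch only: environments as lists of bindings (most recent first), with a
-- hole standing for an erased value. lookupFwd plays the role of the ◃
-- judgement, lookupBwd of ▹.
type Var = String
data Val = Hole | IntV Int deriving (Eq, Show)
type Env = [(Var, Val)]

-- Forward: the most recent binding for z, if any.
lookupFwd :: Env -> Var -> Maybe Val
lookupFwd env z = lookup z env

-- Backward: given the domain of the environment and a demanded binding z:v,
-- the least environment over that domain that still yields v for z --
-- v at the most recent z, holes everywhere else.
lookupBwd :: [Var] -> Var -> Val -> Env
lookupBwd [] _ _ = []
lookupBwd (x : xs) z v
  | x == z    = (x, v) : [ (y, Hole) | y <- xs ]
  | otherwise = (x, Hole) : lookupBwd xs z v
```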

Initial pass over presentation

Skeleton of talk for practice run at PLUG, Tue 11 Jan.

Opening slide

  • title; authors + photos

Problem

  • understanding visual outputs means understanding what visual elements represent
    • examples: various typical (= opaque) charts (see HPI talk)
  • can we do better?
    • open source/open data help provide the raw materials but no insights into how things are related
    • what we would like is computational artifacts able to reveal their relationship to data interactively, on demand (without incurring programmer overhead)
  • two specific problems we would like to make progress towards; first is linking outputs to inputs in a fine-grained way
    • author just writes regular functional viz code -- bar chart example
    • "round-tripping" properties important for trust/correctness (here: sufficiency)
  • second is linking outputs to other outputs in a fine-grained way (and seeing the relevant data) -- line chart example
    • for the forward analysis here, we care about necessity, not sufficiency

Solution (part 1)

  • prior work: Galois slicing provides a way to relate input selections to output selections, with nice round-tripping properties characterised by Galois connections
  • Galois connection: best way to (nearly) invert a map that can't be inverted
    • backwards question: what is the (least) input needed for this output?
    • forwards question: what is the (greatest) output that this input suffices for?
  • in Galois slicing, a selection of an input/output is a prefix (tree with holes)
    • example from Fig. 16
  • however, doesn't support isolating arbitrary components of structured outputs
    • differential slicing helps, but wrong approximation for this problem (example from Fig. 16 -- explain "spine")
  • Galois dependencies
    • lift prefix-closure restriction, so "selections" are not just terms with holes; selections are just subsets of the set of paths in a term
    • represent by associating a bit with each path (more generally: elements of boolean algebras) -- give grammar of values
    • less "program-like", but more fine-grained
  • forward/backward analyses
    • form/meaning of the judgements
    • also introduce hole as shorthand for .. (relevant to environment merge during backward analysis)
  • demo: convolution example
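
The backwards/forwards questions above are the two adjoints of a Galois connection. A toy sketch, using multiplication on naturals as the map being (nearly) inverted (the names `GC`, `bwd`, `fwd`, `timesGC` are mine):

```haskell
-- A Galois connection between ordered sets a and b: a lower adjoint bwd and
-- an upper adjoint fwd satisfying  bwd y <= x  iff  y <= fwd x.
data GC a b = GC { bwd :: b -> a, fwd :: a -> b }

-- fwd x = k * x is "the greatest output that x suffices for";
-- bwd y = ceiling (y / k) is "the least input needed for output y".
timesGC :: Int -> GC Int Int
timesGC k = GC { bwd = \y -> (y + k - 1) `div` k
               , fwd = \x -> k * x }
```

The round-trip laws hold: `bwd g (fwd g x) == x` and `y <= fwd g (bwd g y)` for `g = timesGC k`, i.e. the adjoints are near inverses.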

Solution (part 2)

  • also want to understand how outputs relate to other outputs
    • nature of "forward" question changes: not what x is sufficient for but what x is necessary for
    • turns out our more general notion of selection can help here too, because it supports negation
  • De Morgan dual
    • using negation (complement), can formulate "necessary for" in terms of "sufficient for"
    • think of this as inverting the direction of the GC (by turning meet-preserving into join-preserving and vice versa)
    • more generally: composing a GC with itself
  • demo: linking example
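
On Boolean selections, the De Morgan dual can be sketched by conjugating with pointwise negation (sketch only; `dual` and `allSelected` are hypothetical names):

```haskell
-- De Morgan dual of an analysis on Boolean selections: conjugate by
-- pointwise negation. If f answers the "sufficient for" question, dual f
-- answers the "necessary for" question, swapping meet-preservation for
-- join-preservation.
dual :: Functor sel => (sel Bool -> sel Bool) -> sel Bool -> sel Bool
dual f = fmap not . f . fmap not

-- Example analysis: the output is selected iff all inputs are selected.
allSelected :: [Bool] -> [Bool]
allSelected xs = [and xs]
```

The dual of "all inputs selected" is "some input selected", which is exactly the De Morgan relationship between meets and joins.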

Closing slide

  • conclusion & future work
    • link to f.luid.org
    • future work: expression provenance

Done/dropped:

  • settle on slide solution (Keynote: no; Autodesk SketchBook: yes)
  • experiment with mic in bathroom and up close (bathroom: no; up close: yes)
  • play around with Keynote with iPad/Apple Pencil (outcome: shite)
  • experiment with recording demo (speedup x 4; can embed in Keynote, but not SketchBook)
  • check duration of talk: 20+5 min (see "Accepted submission" email of 28 Sep)
  • quick pass (unrecorded) to identify main remaining problems/omissions
  • define Galois connection using unit/counit formulation rather than isomorphism of hom sets
  • remove blue circle around Chiang Mai urban area
  • drop "spine" explanation
  • pass over layers in Goal 2 solution slide
  • split De Morgan duals into two slides; use earlier colour scheme for GCs
  • reorder slides

Substantive revisions

  • cover letter
  • upload revision
  • line 355: please distinguish the metalanguage and object language phi -- elected not to do this; to be consistent, one must also distinguish metalanguage and object language integers, which would be cumbersome
  • line 246: no typing rule for k?
  • line 259: I found the overloading of --> in this judgment confusing in parts of the exposition.
  • line 473: I did not find the reference to representable functors helpful. Just say that comparison is defined pointwise and give an example.
  • L519,L599,L614,L658,...: I was confused by Sel_(ρ,κ) A \times A; please parenthesise
  • L800: If the type direction is by convention given by the lower adjoint, then please write $Y\rightarrow X$ instead of $X\rightarrow Y$.
  • In general (from Section 3.4 to Section 4.1), it's confusing to use the letters $f$ and $g$ both for the halves of a Galois connection and for a Galois connection; please try to use different letters instead.
  • L1028: Say more explicitly how your analysis computes better dependency information.
  • line 422: are there any complications to supporting higher-order data?
  • The paper notes that "0 * x" does not depend on "x". Does "x * 0" depend on "x", or is there a way you can get the best of both cases?
  • line 151: what is meant by "prefixes"?
  • line 222: what is a "partial" value?
  • Figure 13,14: Yellow is hard to see in black-and-white; please choose a darker color.
  • line 457: clarify what the elements in the tuple represent (which was not explained earlier). One can figure it out, but better not to have to
  • L177: It would be useful to call out why eliminators are used by this language
  • squeeze back into 25 pages
  • more details on how Figure 1 is generated: concrete description of visualization set and source set
  • clearer on relationship between preorder and partial order
  • At about section 2.2.4, I started getting lost and would have found it useful to have some examples
  • build on 2.2.4 to show forward-matching a list, with hole expansion
  • At about section 3.1, I have another note that I was starting to get lost
  • (active) "argument availability/demand" instead of "ambient availability/demand"
  • split out pattern-matching rules from Figs. 9 and 10?
  • add missing Boolean cases?
  • check our response for additional commitments
  • link to tag v0.4.2 in Fluid repo
  • mention dependencies on higher-order data relevant to expression provenance
  • check pagination

Define slicing functions directly

Replace the current inductive/relational definitions of forward and backward slicing, and subsequent definition of Galois connections by domain-restriction, by direct recursive definitions of the Galois connections. This should make the relationship between the definitions and the metatheory clearer, and free up some space. It will require reworking all the proofs.

All forward-slicing functions for desugaring will need to be given separately from the desugaring relation – this will add about 1.25 pages of new definitions, but there should still be a net saving in space.

  • First pass over:
    • eval – consolidate prim-unsat and prim-sat
    • eval fwd
    • eval bwd
    • match fwd/bwd
    • desugar
      • desugar relation without slicing
      • desugar fwd
      • desugar bwd
    • totalise
      • totalise relation without slicing, witnessed by \vec{π}
      • totalise fwd, with \vec{π} index
      • totalise bwd
    • singletonElim (for patterns, pattern sequences and list rest patterns)
      • singletonElim bwd
      • singletonElim fwd
      • singletonElim relation without slicing
  • rec◃ and rec▹
  • env lookup
    • lookup▹ and lookup◃
    • lookup GC
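
In the style of these direct recursive definitions, a toy example for first projection over annotated pairs (a sketch under invented names, not the paper's actual rules):

```haskell
-- Annotated values: every constructor carries a Bool selection.
data Val = IntV Bool Int | PairV Bool Val Val deriving (Eq, Show)

-- Direct recursive forward slicing for first projection: availability of
-- the result is capped by the availability of the pair constructor.
fstFwd :: Val -> Val
fstFwd (PairV a v _) = cap a v
  where cap b (IntV c n)    = IntV (b && c) n
        cap b (PairV c u w) = PairV (b && c) (cap b u) (cap b w)
fstFwd v = v  -- ill-typed scrutinee: leave unchanged (sketch only)

-- Direct recursive backward slicing: demand on the result becomes demand on
-- the first component; the pair constructor is demanded if any part of the
-- result is, and the second component is demanded nowhere.
fstBwd :: Val -> Val -> Val
fstBwd demand (PairV _ _ v2) = PairV (anyAnn demand) demand (noDemand v2)
  where
    anyAnn (IntV b _)      = b
    anyAnn (PairV b u w)   = b || anyAnn u || anyAnn w
    noDemand (IntV _ n)    = IntV False n
    noDemand (PairV _ u w) = PairV False (noDemand u) (noDemand w)
fstBwd _ v = v
```

Defining both directions as plain functions like this makes the Galois-connection structure a property to prove about the pair, rather than something obtained by domain restriction.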

Reinstate nested patterns

  • syntax of nested eliminators/matches
  • pattern-matching
  • fwd-slicing rules
  • bwd-slicing rules
  • join for expressions

Mutual recursion and generalised let

Reinstate rules for mutual recursion from previous draft. Also consider generalised let, to allow expressions like let (x, y) = e in e', which require the notion of a “singleton” eliminator. Put these in the appendix.

  • appendix:
    • mutual recursion
      • syntax and typing rules
      • “close defs” operation, with fwd and bwd slicing
      • revised app rule, with fwd and bwd slicing
    • non-recursive functions
      • syntax and eval rules
      • fwd and bwd slicing rules
    • let f σ – actually no advantage in this form over letrec f σ
    • generalised let
      • singleton eliminators
      • syntax, semantics and slicing rules for generalised let
  • main body:
    • replace recursive function literals by let rec
    • non-recursive function literals (lambdas) – move to appendix
    • revert to zipper-based “match” traces; drop “partial” eliminators from core

First pass over core language

  • Say what the goal of this section is. Make it clear how it leads on from section 1 and (will) lead into section 3.
  • Types. Why do we need records, lists and vectors?
  • single figure syntax
  • notation for meta-level vectors
  • term forms
  • move eliminators later in syntax figure
  • rather than one elimination-form per type, we provide a single pattern-matching elimination form called an eliminator
  • eliminators – explain why we have these rather than case expressions
    • real-world functional languages have rich pattern-matching features such as piecewise definitions, view patterns, pattern synonyms, etc with non-trivial exhaustiveness-checking rules.
    • we want an efficient, simple core pattern-matching feature that is total by construction and can serve as an elaboration target for more advanced features.
    • in section 4 we show how to desugar piecewise definitions that satisfy certain well-formedness conditions into eliminators.
    • related to case trees and similar intermediate constructs used when compiling functional languages with pattern-matching
    • formally, eliminators extend Hinze’s “generalised tries” with variable binding.
  • typing rules: expressions and values won’t need much explaining, apart from cases that involve eliminators
  • eliminator typing judgement
  • syntax of values and environments
  • pattern-matching
  • syntax of matches and traces to move to next section (evaluation)
  • evaluation (and traces/proofs thereof)
  • comment vectors out of core language for now
  • evaluation, auxiliary relations and pattern-matching will all fit in one figure
  • two-column syntax figure
  • some kind of horizontal shading at the top of each relation
  • move values and environments to core syntax figure
  • move values and environment to core typing figure
  • auxiliary evaluation relations
  • two-column figure for traces
  • drop partially applied primitives – leave for now, non-trivial change
  • zip/unzip pattern in vector notation
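
The eliminator-as-generalised-trie idea, sketched for lists (invented names; the real eliminators cover other datatypes too):

```haskell
type Var = String
data Val = IntV Int | ListV [Val] deriving (Eq, Show)

-- An eliminator for lists in the style of a generalised trie: exactly one
-- branch per constructor, so matching is total by construction. The cons
-- branch names the variables it binds.
data ListElim k = ListElim
  { nilK  :: k               -- continuation for []
  , consK :: (Var, Var, k) } -- head/tail variables and continuation for (:)

-- Matching picks the branch for the scrutinee's constructor, returning the
-- environment of new bindings together with the chosen continuation.
matchList :: [Val] -> ListElim k -> ([(Var, Val)], k)
matchList []       e = ([], nilK e)
matchList (v : vs) e = let (x, y, k) = consK e
                       in ([(x, v), (y, ListV vs)], k)
```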

Anonymised paper build omitting appendices

Build script and macros to build submission version:

  • collate-examples.sh to run optionally (but always on GitHub), snapshot of examples in repo
  • macro for proof references difficult – see note below
  • two orthogonal conditionals: anonymous and appendices (although we only need 3 of the 4 cases)
  • set anonymous whenever @ACM@anonymous is set
  • macro for language name
  • options for build.sh, with default
  • GitHub Action to build all configurations
  • submission version to enable review option (for line numbers)

Final tweaks/considerations

  • section 3: better notation for forward/backward environment lookup
  • It would be nicer to have countries as columns (see below)
  • remove vectors from core language?
  • remove partially applied primitives?
  • drop mutual recursion in favour of single recursive function
  • round-tripping figures at end of section 3 (related to section 4)

Countries as columns:

| Year | Type | USA | China | Germany |
| --- | --- | --- | --- | --- |
| 2015 | Bio | 12 | 34 | 56 |
| 2015 | Hydro | 78 | 91 | 11 |
| 2015 | Solar | 12 | 13 | 14 |

Then we can have just a single table and two charts. In one chart, we use RED and in the other BLUE. We then highlight some cells in RED and some others in BLUE.

Draft 2 for PLUG

  • convolution demo
    • record
    • speed up and export
    • example with different boundary method
  • bar chart/line chart demo
    • record
    • speed up and export
  • miscellaneous
    • trial Zoom recording with .mov playback
    • trial playback of recorded talk over Google Meet
    • explain matrix convolution
    • explanations for convolution boundary methods
    • change Fluid text on website to “data-linked visualisations”
    • (recorded) dry run to identify missing bits of exposition (45 min)
  • “Making sense of visual outputs”
    • better summary: can we use PL, data provenance, dynamic analysis to help here?
    • make Chiang Mai chart much larger
    • better title: “provenance-aware” data visualisation?
    • legends are only comments, they have no semantic meaning
  • “How can we improve on this picture?”
    • move to closing slide; add layer that shows input/output relationship
    • shift from “run once” execution to living relationship between input and output
  • “Goal 1: Fine-grained linking of outputs to inputs”
    • mention our own programming language
    • explain round-trip (sufficiency) in terms of reproducibility/trust
    • output no longer oblivious to how it was computed from input
  • “Goal 2: Linking outputs to other outputs”
    • unselected line chart on right?
    • include legends on both charts (but point out that year is not mentioned on left)
    • what is the nature of the relationship on the right? is the same as the one on the left?
    • no: “needs” is wrong direction, and “suffices for” has wrong “modality” (USA data insufficient)
  • “Prior work: Galois slicing 1”
    • present as an example of dynamic program analysis with the desired round-tripping properties
    • explain what example program does
    • by “input” I mean to include program as well
    • new code layer without blue shading
    • capture input/output we care about by erasing irrelevant subtrees; add layer to illustrate
    • allude to role of trace
    • give Galois connection intuition on this slide (monotonicity; increasing/decreasing)
  • “Prior work: Galois slicing 2”
    • perhaps merge with round-tripping part of previous slide (use needs/suffices for as f and g?)
    • significance of meet/join preservation
    • talk about ordering relation on slices
  • “Prior work: Galois slicing 3”
    • for example, I can’t easily select 0.6, but only the expression that surrounds 0.6
    • use brown as the erasure colour
    • use summary of two steps to simplify explanation
    • new title: “Differential” slicing
  • “Prior work: Galois slicing 4”
    • merge into previous slide
    • 2 and 3 not in differential slice because they appear in both slices
    • again the “modality” isn’t the right one
  • “Galois dependencies”
    • Pos(e) -> Paths(e)?
  • “Representing selections”
    • drop hole
    • simplify text: just α in A represents selection state (whether that path is included)
    • useful to allow A to be an arbitrary lattice but most concretely this can just be Bool
    • erase -> shape?
    • mention no annotations on operators or closures
  • “Forward and backward dependency analysis”
    • give flavour of analyses for primitives, structured data?
  • “Demo 1: Matrix convolution”
    • highlight inputs (image, filter)
  • “Goal 2: Linking outputs to other outputs”
    • “sufficient for” in purple?
    • “negate” in neither green nor purple?
  • “De Morgan Duals 2”
    • replace α, β by needs, suffices for?
    • Galois connections compose component-wise
    • the composite GC also has a dual which goes from V2 to V1
    • tell us how V1 and V2 compete for resources
  • “Demo 2: Brushing and linking”
    • ad hoc support for this in existing viz libraries; automation can be ubiquitous and robust
    • also “data transparent”
  • miscellaneous
    • record demos again, omitting unnecessary mouse movements
      • convolution.mov
      • linking.mov
        • on LHS need to select Germany as well
    • blank layer at the top of each slide, for sketching on; reset layer states
    • fresh practice copy of slides

Initial draft

Initial draft prior to meeting with Benjamin on 30 May.

  • Vis 2019 LaTeX template
  • edit template into document skeleton
  • import hand-drawn sketches from Tomas
  • intro blurb, adapted from TPS proposal
  • rough pass over abstract
  • rough pass over intro
  • flesh out and send to Benjamin at end of Weds

Comments on Section 1&2

I read through the first two sections and find them quite good. There are a few comments, which I detail below. But the main concern is probably that it is 10 pages long.

  • line 2: The first sentence is quite hard to penetrate.

  • line 81: I wonder whether program analysis researchers will agree that we advanced the state of the art in that area.

  • line 82: It might be a good idea to hint at the solution. I didn't see any mention of trace for example.

  • line 101: This is already marked for revisiting. Just want to point out that the mention of "selection" is out of place.

  • line 106: I think presenting the type system is a good idea. But probably a bit more justification can be given. And mention why no polymorphism.

  • line 111: "so with omit" Something is wrong here.

  • line 111: I find the notation x: A unnecessarily hard to read. Also, it is worth clarifying that the binding may be to either types or terms.

  • line 120: The explanation of eliminators is good. Is it mentioned anywhere why eliminators are used instead of more common cases?

  • line 163: It might be useful to motivate the use of traces.

  • line 176: It seems that there is some mixup of Figures 5 and 6.

  • line 188: "defined later" -> "explained later"

  • Fig 5: Rules Var and Project seem to have the premises on the side instead of top.

Draft 1

  • fit to 25 pages
  • sanity-check appendices in anonymised-full.pdf
  • non-anonymised version to link to GitHub repo
  • upload
    • anonymised.pdf
    • anonymised-full.pdf
    • nonanonymised-full.pdf (rename target)

Other subtasks:

Anonymised supplementary material (available to reviewers along with paper):

Provisional section lengths (25 pages total, 6 sections, so ~4 pages per section on average):

| section | target length | current length |
| --- | --- | --- |
| 1. intro | 4 pages | 3.5 |
| 2. core calculus | 5 pages | 5 |
| 3. dependency analysis | 8 pages | 8.5 |
| 4. evaluation/toolkit | 4 pages | 4 |
| 5. surface language | 3 pages | 4 |
| 6. related work/conclusion | 1 page | 0 |
| TOTAL | 25 pages | 25 |

Piecewise definitions

The implementation allows piecewise function definitions by pattern-matching, albeit without guards. These are checked for well-formedness (mergeability) and translated into eliminators at parse time. As we integrate desugaring into the implementation, we should migrate this process into the desugaring phase, and retroactively formalise the process that turns a (well-formed) sequence of equations with patterns on the LHS into an eliminator.

@min-nguyen Here’s a breakdown of this task. Once this is formalised, it should be relatively straightforward to adjust the implementation to match the formalisation. The main thing to change will be that in the implementation of piecewise definitions, the parser currently uses a “nested” (eliminator-like) notion of pattern that also combines a continuation. In the formalisation of piecewise definitions we will reuse the (more conventional) notion of pattern defined for the surface language, with a separate continuation. Once we’ve changed the current implementation to work like this, we’ll be in a good position to migrate over to the surface language.

  • Extend the syntax of raw surface terms with a let f \vec{c} in s construct for defining (recursive) functions. We can omit the rec keyword and just assume all functions are recursive. Metavariable c ranges over the clauses which define f. In the concrete syntax, we expect each clause to start with f. But defining the abstract syntax as let \vec{f c} in s would be a bit redundant.
  • A clause is a non-empty sequence of equations of the form \vec{p} = s. I suggest making the non-emptiness explicit in the definition.
  • Give a typing judgement for clauses, using the typing rule for patterns and the typing rule for (surface) terms.
  • Give the additional typing rule for let f \vec{c} in s. Look at how recursive functions are dealt with in the core language. The main extra consideration here is that you will need to type-check each clause; this can be done using a universally-quantified premise of the form ∀i ∈ dom{\vec{c}}. ... (remember that a vector can be thought of as an indexed family or partial function, like we were discussing the other day; dom will retrieve the index set/domain).
  • Give the desugaring rule for let f \vec{c} in s, which will defer to an auxiliary function merge \vec{c} = σ.
  • Defining merge is probably best done in two stages. First, define a function elim p κ = σ which turns pattern p and continuation κ into an eliminator, defined mutually recursively with a function elim o κ = σ which does the analogous thing with a list pattern “rest”. An important detail is how elim represents the partiality of the resulting eliminator. Until now we have had eliminators with missing branches. For various reasons, I think it will be easier if we require all branches to be present (e.g. [] and (:)) but allow the continuation to be a hole (□). In particular this will allow the merge of partial eliminators via the join (∨) operation given in Fig. 12.
  • Second, define merge so that (extensionally speaking) it first maps elim to the list of clauses, promoting each into an eliminator, and then folding over the result. Both of these operations are partial, although in the formalism we can assume the clauses are well-typed, which eliminates the most egregious problems.
  • The ∨ operator for eliminators is mostly sufficient as it is, except that we should define an overload for expressions, where e ∨ e’ is defined (and equal to e) only when e = e’. In other words, by the time the merge operation gets down to the expressions at the end of each clause, they can’t be merged unless they are equivalent. (This is actually more general than any practical language would permit, since it allows “duplicate” clauses of a definition as long as they agree syntactically. But it’s harmless and makes the formalism a bit easier.) We should also define e ∨ □ to be e and σ ∨ □ to be σ.
  • It should be straightforward now to add back match..as back into the surface language, using the same machinery. The syntax should be match s as \vec{c}; the typing and desugaring rules should be easy. In the concrete syntax implemented by the parser, we allow -> instead of = in the clauses, but we needn’t concern ourselves with this difference in the abstract syntax.
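The elim/merge scheme above can be sketched as follows. This is a minimal, hypothetical encoding (the names `elim`, `join`, `merge`, `HOLE` and the tuple representation are illustrative, not the paper's definitions): each pattern promotes to an eliminator with all sibling constructors present and holes for uncovered branches, and clauses merge via a join that is defined on expressions only when they agree.

```python
HOLE = None  # the hole □

# siblings of each constructor (hypothetical: lists only)
CONSTRUCTORS_OF = {'Nil': ['Nil', 'Cons'], 'Cons': ['Nil', 'Cons']}

def elim(pattern, kont):
    """Promote one pattern plus continuation into a total eliminator.

    A pattern is either ('var', x) or ('ctr', name, [sub-patterns]); an
    eliminator maps every constructor of the datatype to a continuation,
    with □ for the branches this clause does not cover.
    """
    if pattern[0] == 'var':
        return ('var-elim', pattern[1], kont)
    _, name, subpats = pattern
    # fold the sub-patterns into nested eliminators, innermost first
    for p in reversed(subpats):
        kont = elim(p, kont)
    branches = {c: HOLE for c in CONSTRUCTORS_OF[name]}
    branches[name] = kont
    return ('ctr-elim', branches)

def join(k1, k2):
    """Join two continuations: □ is the identity, expressions must agree."""
    if k1 is HOLE:
        return k2
    if k2 is HOLE:
        return k1
    if k1[0] == 'ctr-elim' and k2[0] == 'ctr-elim':
        return ('ctr-elim', {c: join(k1[1][c], k2[1][c]) for c in k1[1]})
    if k1 == k2:   # e ∨ e' defined (and equal to e) only when e = e'
        return k1
    raise ValueError('clauses overlap with different right-hand sides')

def merge(clauses):
    """Map elim over the clauses, then fold the eliminators with join."""
    sigmas = [elim(p, k) for p, k in clauses]
    acc = sigmas[0]
    for s in sigmas[1:]:
        acc = join(acc, s)
    return acc
```

For example, merging the two clauses `[] = e1` and `(x:xs) = e2` yields a single eliminator whose `Nil` branch is `e1` and whose `Cons` branch nests two variable eliminators around `e2`.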

First pass over dependency analysis

  • role of “argument availability” parameter α
  • idea that every function execution introduces an “argument availability” context
  • why we need traces in fwd direction (for hole-expansion)
  • Explain that we mean “first-order” data (not closures or primitive operations).
  • Forward/backward dependency analysis:
    • explain fwd/bwd pattern-matching
    • explain fwd/bwd evaluation
      • auxiliary relations
      • Galois connections
  • tidy up slicing files after recent reorg
  • clean up Galois connections for:
    • eval
    • match
    • env lookup
    • recursive definitions
  • roll out \raw macro

Done/dropped:

  • What is the primary purpose of the calculus? To track data dependencies.
  • Core approach: treat data type of terms as a functor so we can attach usage information to certain subterms. Maybe make that explicit, e.g. Expr a where a is the type of the annotations.
  • We will constrain a to be a lattice. Say what a lattice is and why it might be relevant to the idea of selection introduced in section 1.
  • Point out that we can recover (something isomorphic to) “normal” syntax by instantiating Expr with unit (trivial one-point lattice). Expr Bool (Bool as 2-point lattice with the usual Boolean connectives) gives a basic notion of selection. But we can also imagine computing 2 selections simultaneously (as Expr (Bool × Bool)). (It may be premature to go into too much depth here, but we will at least need an example of Expr Bool and how it can represent a data selection.)
  • We might need to switch our use of tt and ff in the semantics to ⊤/⊥ (if we want to remain abstract in an arbitrary lattice).
  • We also have annotations on (some) terms forms – those that relate to data. This allows us to trace back to the source code responsible (we will revisit this in section where we consider a richer surface language).
  • (@rolyp) Move to functional definitions of fwd/bwd slicing
  • Forward/backward dependency analysis:
    • New notation for “unannotated” terms
    • define “selections” w.r.t. a fixed unannotated term, explain how this forms a lattice too
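The annotated-syntax idea above can be sketched concretely. This is a hypothetical encoding (names `Num`, `Pair`, `map_ann` are illustrative): the term datatype is a functor in its annotation type, so instantiating with the unit lattice recovers plain syntax, the two-point Boolean lattice gives a selection, and a product lattice runs two selections simultaneously.

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

A = TypeVar('A')

@dataclass
class Num(Generic[A]):
    ann: A        # annotation attached to this data constructor
    n: int

@dataclass
class Pair(Generic[A]):
    ann: A
    fst: 'object'
    snd: 'object'

def map_ann(f, e):
    """Functorial action: rewrite every annotation with f."""
    if isinstance(e, Num):
        return Num(f(e.ann), e.n)
    return Pair(f(e.ann), map_ann(f, e.fst), map_ann(f, e.snd))

# Expr () — "plain" syntax via the one-point lattice
plain = Pair((), Num((), 1), Num((), 2))

# Expr Bool — a selection: the first component of the pair is selected
sel = Pair(False, Num(True, 1), Num(False, 2))

# Expr (Bool × Bool) — two selections computed at once
both = Pair((False, False), Num((True, False), 1), Num((False, True), 2))
```

Erasing a selection's annotations (mapping everything to `()`) recovers something isomorphic to the unannotated term, which is the sense in which selections are defined w.r.t. a fixed underlying term.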

Suspected typo in Figure 9

  • The forward "record" and "project" rules seem to be given the wrong names.
  • Figure 5 appears after figure 6.

Final version

Minor corrections.

  • #112
  • acknowledgements
  • L76 "to specified" -> "to be specified"
  • L78 "to do allow" -> "to allow"
  • L512 "turing" -> "turning"
  • L775 "using using" -> "using"
  • L776 "other that" -> "other than"
  • L806: What about the ambient availability?
  • L818 "form" -> "and for every input $\overrightarrow{n}$ form"
  • line 75: require data transformations to be specified
  • line 77: what we would like is to allow
  • line 79: artefact as a baked-in
  • line 178: our implementation is untyped
  • line 180: structured data which are
  • line 234: with --> being part of the notation
  • line 240: allows us to transform an eliminator ... into an eliminator of
  • line 370: varepsilon in the unit rule should be ()
  • line 453: The figure is a preorder on values, not a partial order.
  • line 457: please clarify the presentation of the two-point lattice, including which elements are in the set (rather than reusing "2")
  • line 494: Figure 8 is a preorder
  • line 510: In practice
  • line 512: evaluation, turning input
  • line 544: for the var case, perhaps use \top instead of tt, to be more generic
  • line 555&556: in cons, I think kappas on the right should be kappa-prime
  • line 903: again the name "2" is used for both the algebra and the carrier set. Just enumerate the elements of the carrier.
  • L83: "As well as providing"
  • line 1174: Related to this is work on Resugaring by Pombrio et al.
  • L178: "implementation is untyped"
  • L179: "types A and B**,** which"
  • L179: "e : e'" has space before ":" but "x: e" does not. This seems inconsistent.
  • L221: There is an extra space before both citations on this line.
  • L282: There is an extra space before the citation on this line.
  • L325: "figure**,** the"
  • L490: "becomes selection" -> "become selections"
  • L415: ", which are"

Slicing rules for first-class primitives

The slicing rules for first-class binary operations, which include partially applied operations, are non-obvious enough to need writing down. In the old formalism only unary operations were first-class; for the present paper we might want both traditional operation syntax and first-class operations.

  • generalise + to arbitrary binary operation
  • expression form for binary operations, with evaluation and slicing rules
  • generalise primitive values to include unary ops; binary primitives are curried
  • app rule to cover (potentially partially applied) binary operations in the function position
  • fwd slicing rule
  • bwd slicing rule(s)
  • k to range over variables and operator names?
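The currying/saturation behaviour in the list above can be sketched as follows. This is a hypothetical encoding (names `primitive` and `apply` are illustrative): a first-class primitive value accumulates arguments until the application is saturated, at which point the underlying operation runs; an unsaturated application is itself a value.

```python
def primitive(name, arity, impl, args=()):
    """A first-class primitive value, possibly partially applied."""
    return ('prim', name, arity, impl, args)

def apply(fn, arg):
    """Apply a primitive value to one more argument."""
    _, name, arity, impl, args = fn
    args = args + (arg,)
    if len(args) == arity:
        return impl(*args)   # saturated: run the operation
    return ('prim', name, arity, impl, args)   # still a value

plus = primitive('+', 2, lambda x, y: x + y)
```

Under this scheme the surface-level binary application `1 + 2` can desugar to the nested application `apply(apply(plus, 1), 2)`.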

Eval/uneval Galois connection

  • remove traces from the codomain of eval in core calculus
  • ditto implementation language
  • sync with new syntax for definitions
  • use function-space arrow for trie types; drop notion of “continuation type”
  • typing of continuations should defer to either expression typing or trie typing
  • revisit arrow direction for uneval, unmatch?
  • defs should use annotations on variables, as per impl
  • perhaps separately show how to interleave traces with values?
  • prove matchFwd and matchBwd form a Galois connection
    • define matchFwd and matchBwd
    • fwd after bwd
    • bwd after bwd
  • revisit non-determinism introduced by ⊒ notation – might need a new relation
  • determinism lemmas (up to hole equivalence)
  • prove eval and uneval form Galois connection
    • make the trace an index on the relation
    • define evalFwd and evalBwd (for each T)
    • formalise primitives broadly as implemented (#76)
    • switch to 1-D arrays (#77)
    • fwd after bwd
    • bwd after fwd
  • state (prove?) required monotonicity lemmas – deal with this later
  • “closeDefs” GC
    • state theorem
    • prove - #84
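The shape of the Galois-connection theorems above can be illustrated on a toy instance (hypothetical, not the paper's definitions): for the single program `fst (x, y)`, input selections are pairs of Booleans, output selections single Booleans, `bwd` computes the least input selection for an output selection, and the adjunction condition `bwd(v) ≤ x ⟺ v ≤ fwd(x)` is checked exhaustively over both finite lattices.

```python
from itertools import product

def fwd(sel):
    """Forward analysis for fst (x, y): output available iff x is."""
    ax, ay = sel
    return ax

def bwd(beta):
    """Backward analysis: only x is needed for the output."""
    return (beta, False)

def leq_pair(s, t):
    return (not s[0] or t[0]) and (not s[1] or t[1])

def leq_bool(b, c):
    return (not b) or c

def is_galois_connection():
    """Exhaustively check bwd(v) <= x  <=>  v <= fwd(x)."""
    for v in (False, True):
        for x in product((False, True), repeat=2):
            if leq_pair(bwd(v), x) != leq_bool(v, fwd(x)):
                return False
    return True
```

The two composite properties in the task list follow from this adjunction condition in the usual way; the exhaustive check is only feasible here because the lattices are tiny, whereas the paper's theorems are proved inductively over traces.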

25 May - Notes

  • Characterise the “understanding” of a program that we’re aiming for - a general notion of which visualisation is one example (it would be nice if the abstract started from this general notion)

First pass over surface language

Explain/motivate the surface language and relevance to the overall problem. Some definitions will have to move to the appendix.

Code/example todos:

  • example that shows constructed values consuming constructed values (e.g. tree nodes needing list nodes)
  • verify annotations on values prettyprinted correctly
    • consolidate prettyprinting special cases on pairs and lists
    • prettyprint lists as nil, cons nodes
    • add highlightIf logic for constructors

Done/dropped:

  • move core definitions to the appendix
  • two-column format for syntax fig
  • rules for forward/backward sugaring (#41)
  • example figure with source code highlighting
  • The core language serves as a useful minimal calculus but lacks realistic features. Motivate the need for a surface language which can support the examples given earlier while still tracking dependencies back to source code
  • New example containing list comprehensions (w/ generators), clauses, totalise, etc
  • Afterwards, explain that to support the requirement of tracking constructor dependencies back to source code, we need not only the typical desugaring process of general languages, but an extra bidirectional stage on top of the bidirectional analysis of the core language
  • In the forwards direction, it must specify how annotations on surface language expressions can be correspondingly positioned on the core language expressions that they desugar to.
  • Explain the desugaring of our example in chronological order, making reference to and elaborating on the relevant forward desugaring rules for terms, clauses, and totalise (Figure ??).
  • Introduce the syntactic constructs required in the surface language (Figure 15).
  • Introduce the "raw" vs. selectable notion (and conventions) from Section 3, for surface terms.
  • In the backwards direction, it must use the original surface-language expression as a trace t in order to reconstruct the original surface-level program, and specify how annotations on the core language propagate backwards to form annotations on the surface language
  • Explain the backwards desugaring similarly -- perhaps explain each rule of interest by giving the backward and forward cases together?
  • State that the relevant functions form Galois connections, and fwd reference the theorems in the appendix (see section 3 again for how to do this).
  • Mention but probably don't need to explain typing rules.
  • explain why we can get away without a separate (non-slicing) definition of desugaring (can we?)
  • In the explanatory text, give the signature of the forward and backward slicing functions, similar to what we did in section 3 for eval, pattern-matching, env lookup, etc. Use the same notational conventions.
  • future work: defining a "sugaring" embedding for values into the surface language (for intermediate values), and also a semantics which (modularly) interleaves desugaring with evaluation
  • Maybe comment on piecewise definitions, which help motivate eliminators. For clauses we will need the notion of a "partial" eliminator.

Formalise primitives broadly as implemented

The current formalisation of primitives doesn’t allow for operations that don’t depend uniformly on all their arguments (e.g. operations with annihilators). Also, the current formalisation doesn’t easily accommodate desugaring of binary application into nested application of first-class primitives, because it requires binary operators to be annotated (which they currently aren’t). This second problem is related to the first, because we use the annotation on the operator to accumulate the (conjunction of) the annotations on the arguments. We should formalise primitives properly, albeit without all the complexity of the implementation (e.g. overloading, and operations defined on structured values).

  • move binary application to surface language
  • introduce partially applied primitives into the value syntax
  • primitives should accumulate args until application is saturated, as per impl
  • require each primitive to specify a fwd and bwd slicing operation
  • introduce a “primitive application” helper so we don’t need two primitive rules per judgement?
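The per-primitive slicing operations proposed above can be sketched for multiplication, whose annihilator (zero) makes it depend non-uniformly on its arguments. This is a hypothetical encoding (names `times_fwd`, `times_bwd` are illustrative): values carry a Boolean availability annotation, forward slicing makes the product available when a zero operand is, and backward slicing demands only the zero operand when one exists.

```python
def times_fwd(x, y):
    """Forward slicing for *: (raw, annotation) pairs in, pair out."""
    (n, a), (m, b) = x, y
    if n == 0 or m == 0:
        # a zero operand is sufficient on its own
        ann = (a and n == 0) or (b and m == 0)
    else:
        ann = a and b     # otherwise depend uniformly on both operands
    return (n * m, ann)

def times_bwd(x_raw, y_raw, demand):
    """Backward slicing for *: least demand on operands for the product.

    When both operands are zero we (arbitrarily) demand the first; any
    real formalisation must fix such a choice or quotient it away.
    """
    if not demand:
        return (False, False)
    if x_raw == 0:
        return (True, False)
    if y_raw == 0:
        return (False, True)
    return (True, True)
```

A uniform primitive like `+` would just conjoin annotations forward and duplicate the demand backward; the point of the per-primitive interface is that `*` can do better.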

Reviewer response

Due by midnight Wed 8 Sep AoE = 1pm 9 Sep BST (check).

  • fix f.luid.org/new build
  • add explanatory text and ensure reachable from f.luid.org
  • import reviews into repo
  • first pass over response, quoting relevant review comments
  • second (condensed) pass, replacing quoted comments by topic summaries
  • third pass based on Meng's feedback
  • finalise and submit

Tidy-up pass over theorems/proofs

Consolidation pass before next Thursday’s meeting.

  • separate clause lemma for list-rest patterns
  • check each proof for consistency with \eq handling in associated definitions
  • each proof to reference the theorem (part) it proves
  • each theorem to forward reference the relevant proof (but see #104)
  • restate the theorem before the proof? (see objects-as-automata for how to do this)
  • reduce space around \intertext{\crossrule}?
  • standardise on use of parentheses and capitalisation in rightmost column
  • upload new version to arXiv

Switch to 1-dimensional arrays

The current matrix formalisation is less general and more complex than it could be – 1-D vectors would probably be better.

First pass over introduction

Plan of introduction/overview

Introduction

Identifying the problem

  • visualisations central to communicating science and public policy, but easy to misuse/misinterpret
  • can improve this by making them more explorable – usually done through brushing & linking (and lots of viz frameworks support this: altair, vincent, bqplot, shiny, plotly, Glue)
  • but there are various problems with this – automated linking usually only possible for visualisations provided by the library, or where the user specifies explicitly how things are to be linked
  • can we solve this problem using PLT, so that programmer just writes purely functional visualisation code, and the infrastructure does the linking? this would make linking pervasive, automatic, and mathematically robust

Towards a solution: Galois slicing for linking visualisations

  • the above requirements suggest that dependency analysis may be able to help
  • Galois slicing in particular is a technique which lends itself to the bidirectional aspect of the problem
  • namely: when two charts are “cognate” (depend on common data), we can link them by slicing backwards from one chart to identify the needed data, and then forward to see the parts of the other chart that depend on that data
Problems with existing approaches (1)
  • however, prior work on Galois slicing considers a slice to be a “partial term” or context (term with holes) – which isn’t a suitable notion of selection – one cannot formulate the question “what is needed for Y?” where Y focuses on an arbitrary part of the output
  • prior work does propose differential slicing as a notion of selection, but this underapproximates the required dependency information
    • idea of differential slicing is to “focus” on a subtree v, compare the slice with respect to the “spine” which picks out v, and the slice with respect to that same spine with v included
    • if any of the input needed for v is also needed to compute the spine, this won’t show up in the differential slice (simple example with let/pair)
  • to avoid this problem we will formalise the notion of selecting elements of interest directly in the Galois framework
Problems with existing approaches (2)
  • we also need to be able to ask a different but related kind of question: “what depends on X?”

Overview/motivating example(s)

  • we will break down our problem of linking cognate visualisations into two subproblems: (a) what data does some part of V1 depend on, and (b) what parts of V2 depend on that data?
  • problem (a) can be illustrated by considering a familiar example: matrix convolution
    • when we select an output cell, we can see what input cells and kernel cells were needed
    • there is an implicit dual to this question: what parts of V2 depend only on some data? i.e., what parts of V2 is that data sufficient for?
    • these questions (“what resources are needed for Y?” and “what can we do given resources X?”) are dual (adjoint)
  • problem (b) is distinct from problem (a) but also from its dual, and can also be illustrated using matrix convolution
    • when we select an input cell X, we can see what output cells needed it (in general X may not be sufficient for any cells, but there may be many cells which need it)
    • we can formulate this problem by flipping the polarity of (negating) the input, so that we have all resources apart from X; anything we can no longer compute must depend on X
    • in turn, the negation of this (the output) is exactly the things that depend on X
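The polarity-flipping construction sketched in the last two bullets can be made concrete. This is a toy sketch (the dependency table and names `fwd`, `needed_by` are made up, not the convolution example): given a forward analysis that says which outputs are computable from a set of available inputs, "what depends on X" is obtained by negating the input selection, running the forward analysis, and negating the result.

```python
DEPS = {            # each output and the inputs it needs (hypothetical)
    'o1': {'a'},
    'o2': {'a', 'b'},
    'o3': {'c'},
}
INPUTS = {'a', 'b', 'c'}

def fwd(available):
    """Outputs whose every needed input is available (sufficiency)."""
    return {o for o, needed in DEPS.items() if needed <= available}

def needed_by(xs):
    """De Morgan dual: outputs that cannot be computed without xs."""
    computable_without = fwd(INPUTS - xs)    # negate the input selection
    return set(DEPS) - computable_without    # negate the output selection
```

So with everything except `a` available, only `o3` remains computable, and negating that answer says `o1` and `o2` depend on `a`.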

Some to dos:

  • (@rolyp) abstract
    • alternative to “slicing” terminology (e.g. Galois dataflow, Galois data dependency, Galois provenance..)
  • (@tpetricek, @rolyp) initial plan for intro/overview (see notes in comments below)
    • example: matrix convolution (some nice lecture notes on convolutions and kernels)
    • example: brushing-and-linking

Projectional argument slicing

It should be possible to project away unused arguments, analogously to how we project away unused parts of tuples. There may be a difficulty projecting away the last argument to a function while preserving “application-hood”; it might be possible to deal with this in the definition of projection, by specifying that unneeded final arguments project to the unit value, rather than being erased.

Auxiliary lemmas for desugaring Galois connection

Prove the auxiliary lemmas at the beginning of the appendix. @min-nguyen We’ll pick these up in the first week of May.

  • (Roly) tweaks to totalise and 4th auxiliary lemma (see below)
  • list rest
  • clause
  • (Roly) clauses
    • restore generality to clause-bwd (so it takes an arbitrary eliminator, not just a singleton)
    • clause proof should explain that σ is a singleton and therefore any σ’ below σ is too
    • syntax of partial eliminators
    • “disjoint join” of partial eliminators (undefined for expressions)
    • express clause desugaring (fwd and bwd) in terms of “disjoint join” (observe bwd still deterministic)
    • p should be vec{p} in clause lemma
    • write \vec{c} \neq \seqEmpty in preference to \length{\vec{c}} \numgt 0
    • fwd/bwd proofs for clauses
  • totalise

First pass over negation section

  • 4-way figure to illustrate needs vs. neededBy and their adjoints
    • reinstate ability to configure selection colour
    • output on right, inputs on left
    • switch to emboss (because it's normalised and has zeros in the filter) and zero
    • "needed by" example to select upper middle instead of upper left cell in filter
    • upper adjoint of "needs": example that selects strictly more on round-trip
    • lower adjoint of "needed by" should select upper row of filter (?)
    • edit into figure for paper
      • 2 subfigures, one for each pair of relations?
      • 2 code subfigures (convolve library code + usage example) underneath?
  • revisit/flesh out intuitions in section 1
  • flesh out GC discussion at end of section 3
  • selection states as elements of a Boolean lattice (with negation) push this back to section 3?
  • de Morgan dual of fwd/bwd eval
  • some sort of schematic to show 4 functions and their relationships
  • relationship to Galois slicing

Source code for convolution example

Script to download .fld source files from the Fluid repo and put them somewhere the LaTeX build expects to find them.

  • GitHub Actions workflow to build paper:
    • switch from Makefile to build.sh
    • status badge in README.md
    • upload PDF somewhere
      • upload-artifact (uploads a zip) – that’ll do for now
      • try softprops/action-gh-release (create release for every tag) ❌ requires tag on every push
      • try rymndhng/release-on-push-action (creates tag for every push) ❌ don’t want source zips
      • publish via GitHub Pages – no, would be public
    • only run on release branch
  • pull down fluid/lib/convolution.fld into new fluid subfolder (and add to .gitignore)
  • listings file for convolution.fld
    • import using lstinputlisting
    • fix white lines on grey background (either drop background or use mdframed to put grey layer)
    • keyword formatting/style
    • tweak formatting and variable names
    • math symbols for <-, >= and <= (since we can’t use Fira Code without XeTeX or similar)
  • conv_extend example
    • parameterise script for convolution.fld
    • import using lstinputlisting
    • tweak formatting and variable names
  • extract relevant steps from build.sh to separate script

PLUG practice talk

  • Dry run
    • have both .mov files open full-screen and rewound to beginning
    • reset layer states on all slides
    • record
  • Presentation
    • less is more when explaining
    • remove 2015 highlight from Goal 1
    • "brushing and linking" on Goal 2
    • reorder layers on Goal 2
    • clarify Galois slicing has the needs/sufficesFor property
    • define two analyses over a trace of e => v
    • two views of the same data
    • new copy of "raw" slides
  • Bits to rehearse
    • Galois slicing slide 1
  • Topics I may need answers to or to be able to expand on
    • data dependency can be on constants in the program too
    • other data may be deemed irrelevant because it is only involved in the "why", not the "what"
    • different kinds of question one might ask; this approach allows them to be asked independently
    • “general case” (composing a GC with its dual)
    • what are the arguments to method in the convolution example?

PLUG prep:

  • have both .mov files open full-screen and rewound to beginning
  • reset layer states on all slides
  • charge Apple Pencil and iPad
  • mic on and nearby
  • brown as highlight colour

Records

It’s hard to do data science examples without records. The following should generalise the current pair rules, which will now be subsumed by a desugaring into records:

  • record types
  • record eliminators – make sure singleton record eliminator distinguishable from variable eliminator
  • record constructors { \vec{x: e} } and corresponding value form
  • match and trace forms
  • typing rules for values, expressions and eliminators
  • pattern-matching and evaluation rules
  • leq and join definitions
  • forward and backward slicing for pattern-matching
  • forward and backward slicing for evaluation

The main technical questions are:

  1. Do record eliminators “nest” in the way that pair eliminators currently do? Probably yes, but that might look a bit weird: a record eliminator will be a “partial matcher” for the first field only. Experiment with this first.
  2. W.r.t. annotations (and construction/deconstruction), should we treat records as heterogeneous lists? Or should there be a single annotation for the entire record?
  3. In our examples, pairs are datatypes, not record types – need to rethink the desugaring idea
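A quick experiment with question 1 above: a record eliminator that, like the current pair eliminators, "nests" by matching only its first field and delegating the remaining fields to a continuation eliminator. The encoding (`record_elim`, `match_record`, the `'rec-elim'` tag) is entirely hypothetical.

```python
def record_elim(fields, kont):
    """Build a nested record eliminator from field names, innermost-out."""
    for f in reversed(fields):
        kont = ('rec-elim', f, kont)
    return kont

def match_record(record, elim, env):
    """Thread record field values through the nested eliminator,
    binding each matched field into the environment."""
    while isinstance(elim, tuple) and elim[0] == 'rec-elim':
        _, f, elim = elim
        env = {**env, f: record[f]}
    return env, elim    # remaining continuation is the clause body
```

The "weirdness" anticipated above is visible here: each layer of the eliminator mentions only one field, so the record type is spread across the nesting rather than stated in one place.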

Desugaring:

  • surface form for pairs (e, e’) and typing rule
  • retain pair patterns, but pair pattern typing now yields a record type
  • record patterns, plus typing rule
  • disjoint join (record case to generalise pair case)
  • desugar fwd and bwd to { fst: e, snd: e’ }
  • clauses desugaring: pair pattern to record eliminator, plus record pattern rule (fwd and bwd)
  • totalise: pair case to use record eliminator, plus record pattern rule (fwd and bwd)
  • “empty record” macro
  • experiment with parentheses around all records?

Projection:

  • record projection expression, typing and trace form
  • evaluation, fwd and bwd rules

We may end up defining record operations that won’t be typable in the naive type system we give in the paper. But most of our prelude isn’t typable in that type system either, because the type system lacks polymorphism.

Camera-ready copy (final version)

  • please omit all packages/commands that interfere with default length, e.g. \raggedbottom
  • second-level and third-level headings: headline-style capitalization should be used
  • p. 16: undefined figure reference
  • caption missing for figure on p. 17
  • References: Several DOIs are missing
  • author-year-style citations should be used, not numeric citations
  • p. 27 should be filled with references, there is no need to start a new page for this section
  • Fig. 2 should not reach into the right margin outside the text area
  • please consider numeric order for position of Figs. 12 and 13 (i.e. Fig. 12 above Fig. 13)
  • p. 1 seems too short, copyright information and 'Authors’ addresses' are not in correct position

Please take the following small issues into your consideration
(cf. instructions page).
https://www.conference-publishing.com/Instructions.php?Event=POPL22MAIN&Paper=9a290741459e0754e32a4c254aa860d757e4a05d

  • Please make sure to use the default length of all pages.
    Page 1 seems to be too short, the copyright information and the 'Authors’ addresses' are not in the correct position.
    Please omit all packages and commands that interfere with the default length.
    \raggedbottom looks suspicious.

  • Second-level and third-level headings: Headline-style capitalization should be used.
    Capitalize:
    • first and last word, first word after a colon
    • all major words (nouns, pronouns, verbs, adjectives, adverbs)
    Lowercase:
    • articles (the, a, an)
    • prepositions (regardless of length)
    • conjunctions (and, but, for, or, nor)
    • to, as
    Hyphenated compounds:
    • always capitalize the first element
    • lowercase the second element for articles, prepositions, and conjunctions,
      and if the first element is a prefix or combining form that could not stand by itself
  • References: Author-year-style citations should be used in this ACM style, not numeric citations.

  • Page 27 should be filled with references, there is no need to start a new page for this section.

  • References: Several DOIs are missing. Please consider adding them.

  • On page 16, there is an undefined figure reference in the text.
    The caption is missing for the figure on page 17.

  • Please consider a numeric order for the position of figures 12 and 13
    (i.e. Fig.12 above Fig.13).

  • Figure 2 should not reach into the right margin outside the text area.

Please make the proposed changes and re-submit your paper if possible within three workdays.
(Please use the submission link from the author-kit e-mail, do not send by e-mail.)
Submission link:
https://www.conference-publishing.com/submission.php?Event=POPL22MAIN&Paper=9a290741459e0754e32a4c254aa860d757e4a05d

Desugaring Galois connection

Show how the fwd and bwd desugaring slicing rules form a Galois connection (for a given program). Tasks:

Done/dropped:

  • sanity-check “ambient computation” in desugarBwd – needs further thought, but basically ok
  • helper macros for things like totaliseGeq
  • consolidate list-comp-gen proofs using new list-gen lemma
  • roll out macros to proofs
  • purge superfluous x-refs
  • simplify list comprehension proofs to remove done and list-comp-last
  • macroise desugaring overloads
  • check list-comp-gen bwd – can this be made more symmetric with fwd rule?
  • define functions desugarFwd_s, desugarBwd_s in terms of the corresponding relations, similar to eval
  • theorem stating that the two functions form a Galois connection
  • drop done sentinel in list comprehension syntax and allow [s | ε] instead
  • ensure eq rather than geq used for (meta-) “pattern-matching”
  • trace analogues as subscripts in backward desugaring relations
  • new symbol for desugaring relations that have expression or continuation on the left
  • define Galois connection notation f ⊣ g in terms of composites (f after g) and (g after f) – see def:core-language:gc
  • define \Below{v} notation using
  • prove “part 1” (fwd after bwd is increasing)
  • prove “part 2” (bwd after fwd is increasing)
  • identify any other theorems we need as go along, e.g. monotonicity
  • state GC theorems for:
    • desugar for list rest
    • totalise
    • desugar for clauses
    • desugar for single clause (judgements p, κ \desugar σ and \vec{p}, e \desugar σ)

Other useful points of reference:

  • “set of paths in a term” exposition in 3.2 of FPTETW
  • methodology summarised in 2.3 of IFPTETW

HPI presentation

Background slides to motivate/frame demo:

  • salvage what I can from Shonan slides
  • slide with 3 or 4 papers listing problems with data visualisations
  • research hypothesis: we need to make visualisations more transparent and explorable
  • relate to linking and brushing in D3, etc
  • example of whence/whither questions
  • Galois connection diagram from poster
  • revise "technical goals" slide to summarise main challenges
  • dry run 1

Hole rules

Holes are still needed for the slicing algorithm to be tractable (otherwise environment merging is exponential), even though a hole is equivalent to a term with all annotations set to false (one where no paths have been selected). It probably makes sense to add explicit rules for holes; they will overlap (in a “compatible” way) with the existing rules, and so are technically redundant, but will be helpful for implementation.

It turns out this isn’t straightforward (and in fact, nor is the implementation; it will need similar fixing). The issue is that forward-slicing of pattern matching, when the scrutinee is a hole, requires knowing the prefix of the value that was originally matched (a “traced match”, or what we simply call a “match”). In turn this means forward slicing requires a trace. We can’t just assume that pattern-matching a hole produces a hole for the evaluation of the continuation (as with our previous approach); that isn’t true, because once control transfers to a different function context, we’re free to start constructing values again.
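The two points above — cheap joins via explicit holes, and expanding a hole against a known shape — can be sketched as follows. The term representation is hypothetical (annotation, constructor, children triples); the point is that joining anything with □ is O(1), whereas joining two fully-materialised all-false terms is a full structural merge.

```python
HOLE = None  # the hole □, equivalent to an all-false copy of the term

def join(u, v):
    """Join two slices of the same underlying term; □ is the identity."""
    if u is HOLE:
        return v
    if v is HOLE:
        return u
    ann_u, ctor, kids_u = u
    ann_v, _, kids_v = v   # same constructor: both are slices of one term
    return (ann_u or ann_v, ctor,
            [join(a, b) for a, b in zip(kids_u, kids_v)])

def expand(slice_, template):
    """Hole-expansion: replace □ by the all-false copy of the template."""
    if slice_ is not HOLE:
        return slice_
    _, ctor, kids = template
    return (False, ctor, [expand(HOLE, k) for k in kids])
```

The `expand` helper takes the known shape as an explicit template; in the formalism that shape comes from the traced match, which is exactly why forward slicing needs a trace.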

  • hole terms, hole values
  • match syntax should just be value with some subtrees replaced by variables
  • extra join equations
  • use w as metavariable for matches instead of ξ
  • forward slicing for pattern-matching now takes a “match”; new hole-propagating rules
  • sync backward slicing for pattern-matching; use holes rather than bot_κ
  • forward slicing rules for eval take a trace
  • drop bot notation for least slice
  • desugar/fwd rules
  • desugar/bwd rules
  • desugar-list-rest fwd and bwd rules
  • hole rules in separate figures
  • appendix A – revisit once we’ve finalised rest of formalism/impl

Other fixes unrelated to holes:

  • (experimental) family notation for join/meet of multiple annotations
  • drop “non-empty” side-conditions (implicit)
  • missing annotations on identifiers in applications
  • missing annotations on guard qualifiers (and fix syntax)
  • missing annotations on [] in guard qualifiers
  • list-comp-decl rule (inconsistently) included β (previously α_3) into the demand on the qualifier
  • totalise should take an annotation to attach to any constructed nodes
  • untotalise should return the (join of) the annotations on any constructed nodes
