grin-compiler / grin

GRIN is a compiler back-end for lazy and strict functional languages with whole program optimization support.

Home Page: https://grin-compiler.github.io/

Haskell 86.61% Shell 0.04% Roff 0.06% Nix 0.53% C 0.76% LLVM 4.44% Assembly 4.36% TeX 0.87% C++ 2.34%
optimisation compiler haskell llvm data-flow-analysis functional-programming

grin's People

Contributors

anabra, anderssorby, andorp, bollu, csabahruska, exfalso, grinshpon, hanstolpo, hgsipiere, lightandlight, lsleary, nickwanninger, pdani, phile314, tobiasgrosser, z-snails


grin's Issues

SparseCaseOptimisation removes useful alternatives

On commit 2976a99 (current HEAD).

In src.grin:

grinMain =
  v <- pure (CA)
  print v
  v2t <- pure (Fother v)
  v2 <- eval v2t
  print v2

print x =
  case x of
    (CA) -> _prim_int_print 1
    (CB) -> _prim_int_print 2

other y = 
  case y of
    (CA) -> pure (CB)
    (CB) -> pure (CA)

eval thunk =
  case thunk of
    (Fother yt) ->
      other yt

Interpreting with stack exec -- grin src.grin --eval yields the output 12 (), but if I run the optimisations with stack exec -- grin src.grin, the resulting code is:

grinMain =
  v <- pure (CA)
  _prim_int_print $ 1
  pure v

Interpreting this code yields 1 (CA), which is different from the output before optimisations.


This appears to happen due to SparseCaseOptimisation.

Before, in 004.InlineEval.grin:

grinMain =
  v <- pure (CA)
  print $ v
  v2t <- pure (Fother v)
  v2 <- do
    thunk.0 <- pure v2t
    (Fother yt.0) <- pure thunk.0
    other $ yt.0
  print $ v2

print x =
  case x of
    (CA) ->
      _prim_int_print $ 1
    (CB) ->
      _prim_int_print $ 2

other y =
  case y of
    (CA) ->
      pure (CB)
    (CB) ->
      pure (CA)

After, in 005.SparseCaseOptimisation.grin:

grinMain =
  v <- pure (CA)
  print $ v
  v2t <- pure (Fother v)
  v2 <- do
    thunk.0 <- pure v2t
    (Fother yt.0) <- pure thunk.0
    other $ yt.0
  print $ v2

print x =
  case x of
    (CA) ->
      _prim_int_print $ 1

other y =
  case y of
    

InlineApply causes non-defined function when self recursive

When the function apply includes a call to itself, InlineApply inlines the apply function and replaces the occurrence of it with apply.0; however, this function doesn't exist.

test.grin

 apply fnp argp =
    fn <- fetch fnp
    case fn of
        (P1MyFun) ->
            x <- store (CNull)
            apply argp fnp
        (P1OtherFun) ->
            pure (CNull)

grinMain =
    fn1 <- store (P1MyFun)
    arg1 <- store (P1OtherFun)
    apply fn1 arg1

grin --optimize test.grin

.output/002.InlineApply.grin



grinMain =
  fn1 <- store (P1MyFun)
  arg1 <- store (P1OtherFun)
  do
    fnp.0 <- pure fn1
    argp.0 <- pure arg1
    fn.0 <- fetch fnp.0
    case fn.0 of
      (P1MyFun) ->
        x.0 <- store (CNull)
        apply.0 $ argp.0 fnp.0 -- Problem is here
      (P1OtherFun) ->
        pure (CNull)

This is a handwritten example, but I encountered this problem when compiling a simple program with my idris2 grin backend, as inlining of other functions caused apply to be self recursive.

I can think of 2 solutions:

  1. don't inline apply if it's self-recursive - this could reduce the effectiveness of other optimisations
  2. keep the definition of apply around after inlining.

I don't know the grin codebase very well, and I'd need to learn recursion schemes first, so I can't fix this myself right now.
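For reference, solution 1 amounts to a syntactic self-recursion check before inlining. A minimal sketch in Haskell, using a made-up toy AST rather than grin's real Exp type (all names here are illustrative, not the actual codebase's):

```haskell
-- Toy AST, NOT grin's real Exp type; just enough to illustrate the check.
data Exp
  = App String [String]   -- function application: f x y
  | Case String [Exp]     -- case on a scrutinee, with alternative bodies
  | Bind Exp Exp          -- sequencing: lhs; rhs
  deriving Show

-- A definition is self-recursive if its own name occurs as an
-- applied function anywhere in its body.
isSelfRecursive :: String -> Exp -> Bool
isSelfRecursive name = go
  where
    go (App f _)   = f == name
    go (Case _ as) = any go as
    go (Bind l r)  = go l || go r

main :: IO ()
main = do
  -- the body of apply from test.grin above, roughly transcribed
  let applyBody = Case "fn"
        [ Bind (App "store" ["CNull"]) (App "apply" ["argp", "fnp"])
        , App "pure" ["CNull"]
        ]
  print (isSelfRecursive "apply" applyBody)  -- True: skip inlining
```

InlineApply could consult such a predicate and leave self-recursive apply definitions alone (at the cost noted in solution 1), or, per solution 2, keep the original definition in the program after inlining.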

Topics to write about

  • related projects using GRIN intermediate language or whole program compilation:
    JHC, LHC, UHC, HRC, MLton
  • future directions: ASAP, gibbon, MLton's IR, SPMD
  • frontend priorities: GHC, Idris, (Agda?)
  • project status
  • project priorities:
    1. analyze code
    2. compile code
    3. minimal tooling (debugger/profiler)
    4. compile fast code
    5. compile fast code quickly
    6. extensive tooling
  • possibilities of reusing llvm and existing tooling from imperative world

Windows support?

Can I build this on Windows? Related: #54 (I wish I could have a Windows binary)

Issues / Tasks for new contributors

Hey!

I'm wondering if there are any issues / tasks suitable for new contributors? I've been watching this project for a while and would be interested in contributing but where do I start?

side note: are there any docs on how to use grin as a language backend?

TypeEnv/HPT based dead code elimination

HPT reveals dead/unreachable code. Unreachable values have T_Dead type in the type environment.
Write a transformation to remove all T_Dead values from the program.
It is required for LLVM codegen because it cannot handle T_Dead types.
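A hedged sketch of the shape such a transformation could take, on a drastically simplified AST and type environment (grin's real TypeEnv and Exp types differ; everything below is illustrative only):

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- Simplified stand-ins for grin's type environment and AST.
data Ty = T_Int | T_Unit | T_Dead deriving (Eq, Show)

data Exp
  = Pure String           -- pure v
  | Bind String Exp Exp   -- v <- lhs; rhs
  deriving (Eq, Show)

-- Drop every binding whose bound variable is T_Dead in the type env.
-- (A real pass would also have to respect effects on the left-hand side.)
removeDead :: Map String Ty -> Exp -> Exp
removeDead env (Bind v lhs rhs)
  | Map.lookup v env == Just T_Dead = removeDead env rhs
  | otherwise = Bind v (removeDead env lhs) (removeDead env rhs)
removeDead _ e = e

main :: IO ()
main = do
  let env = Map.fromList [("x", T_Dead), ("y", T_Int)]
  print (removeDead env (Bind "x" (Pure "a") (Bind "y" (Pure "b") (Pure "y"))))
  -- prints: Bind "y" (Pure "b") (Pure "y")
```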

Consider Renaming

Hi,

There is a slightly older and at least as prevalent project https://grin-tech.org/ https://github.com/mimblewimble/grin

Edit: I'm seeing both projects being undertaken seriously, and being received seriously within their respective communities... The more mature and successful both of these projects become, the more annoying it may become to query search engines, to talk about them without confusion, etc.

Fix E2E tests for OSX

When running the e2e tests on OSX, a test failed which checks the generated assembly:

  test/Test/EndToEnd.hs:245:25:
  3) End to end tests, test-data, sum-simple, pipeline-test: sum_simple_exp.grin
       expected: 	.text
                 	.file	"<string>"
                 	.globl	grinMain                # -- Begin function grinMain
                 	.p2align	4, 0x90
                 	.type	grinMain,@function
                 grinMain:                               # @grinMain
                 	.cfi_startproc
                 # %bb.0:                                # %grinMain.entry
                 	movl	$50005000, %edi         # imm = 0x2FB0408
                 	jmp	_prim_int_print         # TAILCALL
                 .Lfunc_end0:
                 	.size	grinMain, .Lfunc_end0-grinMain
                 	.cfi_endproc
                                                         # -- End function
                 	.type	_heap_ptr_,@object      # @_heap_ptr_
                 	.bss
                 	.globl	_heap_ptr_
                 	.p2align	3
                 _heap_ptr_:
                 	.quad	0                       # 0x0
                 	.size	_heap_ptr_, 8


                 	.section	".note.GNU-stack","",@progbits

        but got: 	.section	__TEXT,__text,regular,pure_instructions
                 	.macosx_version_min 10, 13
                 	.globl	_grinMain               ## -- Begin function grinMain
                 	.p2align	4, 0x90
                 _grinMain:                              ## @grinMain
                 	.cfi_startproc
                 ## %bb.0:                               ## %grinMain.entry
                 	movl	$50005000, %edi         ## imm = 0x2FB0408
                 	jmp	__prim_int_print        ## TAILCALL
                 	.cfi_endproc
                                                         ## -- End function
                 	.globl	__heap_ptr_             ## @_heap_ptr_
                 .zerofill __DATA,__common,__heap_ptr_,8,3

                 .subsections_via_symbols

On first look, the code seems to be ok (I see the sum constant in the assembly). Should we make it so there are two files it can check against, depending on the current OS?

How best to produce executable machine code from grin

This low level stuff isn't really my area (I usually just write Haskell), and I had a bit of trouble with this. In particular:

  • Trying to properly control the grin cli via its options only caused it to crash with pattern match failures.
  • No main / entry point is produced.
  • _prim_int_print is an unknown symbol. I found a note saying it was ad-hoc and would be removed, but it seems that references to it can be produced even when it's not used directly.

For now I've muddled my way to a (perhaps naive) solution:

prim.c

#include <stdio.h>
#include <stdint.h>

int64_t grinMain();

void _prim_int_print(int64_t i) {
  printf("%ld", i);
}

int main() {
  return (int)grinMain();
}

compile

#!/usr/bin/env sh

grin "$1.grin"                      &&
llc-7 -filetype=obj .output/*.ll    &&
mkdir -p exes                       &&
gcc -o "exes/$1" .output/*.o prim.c &&
rm -r .output/

This works for some small grin snippets such as:

test-return.grin

grinMain = pure 42

test-print.grin

grinMain =
  j <- pure (CInt 23)
  print j

print i =
  (CInt i') <- pure i
  _prim_int_print i'

test-add.grin

grinMain =
  n1 <- pure (CInt 42)
  n2 <- pure (CInt 10000)
  n3 <- add n1 n2
  (CInt r) <- pure n3
  _prim_int_print r

add m n =
  (CInt m') <- pure m
  (CInt n') <- pure n
  s1 <- _prim_int_add m' n'
  s2 <- pure (CInt s1)
  pure s2

test-indirect-add.grin

grinMain =
  t1 <- store (CInt 1)
  t2 <- store (CInt 10000)
  t3 <- store (Fadd t1 t2)
  (CInt r') <- eval t3
  _prim_int_print r'

add m n =
  (CInt m') <- eval m
  (CInt n') <- eval n
  b' <- _prim_int_add m' n'
  pure (CInt b')

eval q =
  v <- fetch q
  case v of
    (CInt x'1) -> pure v
    (Fadd a b) -> w <- add a b
                  update q w
                  pure w

However, it doesn't work for the examples in test-data/dead-data-elimination, and I'm not sure whether the issue originates in the examples themselves (are they currently expected to fail?), in the grin compiler, or in my process. Compiling the pnode source fails with

PipelineStep: JITLLVM
grin: user error (Pattern match failure in do expression at src/Reducer/LLVM/JIT.hs:71:13-38)

which presumably is actually one of the first two options, but more troubling is that the length source does compile, then execution fails with a segmentation fault.

End-to-end test framework

Currently there are simple unit tests as part of the testing process. I would like to extend this with an end-to-end approach similar to the one I implemented in the idris-grin test library.

The current situation we have:
We run unit tests using Hspec and report the coverage of the result. After the unit tests we run the grin compiler on the sum_simple example. The build fails if there are test failures or some exception during the compilation of sum_simple.grin. The output of the sum_simple compilation is never checked. As the project grows and gains popularity and a user base, we need more thorough testing.

The use-case for this extension is already discussed:

  • It would be good to maintain a set of example grin files in the repository and use them during the development and after during part of the regression testing
  • Test cases from x-grin back-ends would be easily added to the future regression suite

I propose the following: The end-to-end test suite should

  • run after the unit test phase
  • have a test data directory where all the test grin files should live
  • be able to read the grin files from textual or binary format
  • grin files which have a main module should be interpreted (see details below)
  • grin files which do not have a main should have an options yaml file which sets the pipeline options, and an expected output file
  • add to the coverage report

Interpreted tests: The end-to-end suite should

  • run the interpreter on the test grin file without any optimizations, save the output as expected result
  • run the interpreter on the test grin file with the optimization pipeline, and compare the output with the expected output. If there is a difference, it should bisect the intermediate optimization steps to pinpoint where the output started to diverge from the expected.
  • generate an executable from the optimized version and check the output against the expected.
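The divergence search could, at its simplest, be a linear scan over the saved intermediate programs: interpret each one and report the first optimisation step whose output no longer matches the unoptimised run. A sketch (step names and types are made up for illustration; the outputs below reuse the SparseCaseOptimisation example from earlier in this page):

```haskell
-- Given the expected (unoptimised) output and the interpreter output
-- after each named pipeline step, report the first diverging step.
firstDivergence :: Eq a => a -> [(String, a)] -> Maybe String
firstDivergence expected steps =
  case dropWhile ((== expected) . snd) steps of
    []              -> Nothing
    ((name, _) : _) -> Just name

main :: IO ()
main = print $ firstDivergence "12 ()"
  [ ("004.InlineEval.grin",             "12 ()")
  , ("005.SparseCaseOptimisation.grin", "1 (CA)")
  ]
-- prints: Just "005.SparseCaseOptimisation.grin"
```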

Thoughts on the binary format: Binary input format is not preferred. Currently it is a shortcut to overcome some issues in the parser. Backward compatibility for the end-to-end tests is not required. Whenever we need to use the binary format in the end-to-end test suite, it indicates some regression in the parser. After fixing the regression in the parser, the binary-stored test data should be converted to textual grin. This can be done using the grin compiler.

What LLVM versions are supported by the codegen?

In the documentation, LLVM 7 is used. I am just curious whether any version of LLVM can be used or if v7 is a hard requirement. If it is a hard requirement, do you have any estimate of the work that would be involved in bringing it up to, say, the 12.x version?

Foreign pointer primitive type

This is needed for various features of many functional languages, e.g. arbitrary-precision integers and arrays implemented via the FFI.

If there's only support for foreign pointers, that makes code-gen easier, as the layout is consistent; however, having structs would be useful, e.g. for interfacing with pass-by-value C libraries.

Confused about non-covered alternatives and RunPure

Hi!

I've made a simple frontend for GRIN and struggle with the optimizations.

  1. When running stack exec grin -- --optimize --no-prelude --print-errors --continue-on-failed-lint dump.grin for this I get a lot of
case has non-covered alternative CError
case has non-covered alternative CFalse
case has non-covered alternative CNode
case has non-covered alternative CNone
case has non-covered alternative CTrue
case has non-covered alternative CVal
....

which fails the linting. I guess it is the result of eval inlining and case alternative removal, but I don't know why linting should fail then. And finally, I get my process killed at the RunPure stage:

PHASE #3
  PipelineStep: T BindNormalisation                                                           had effect: None (0.001100 ms)
  PipelineStep: T SimpleDeadVariableElimination                                               
  Analysis
   PipelineStep: Eff CalcEffectMap                                                            (0.000600 ms)
  had effect: None (1.962300 ms)
  PipelineStep: T NonSharedElimination                                                        
  Analysis
   PipelineStep: Sharing Compile                                                              (0.000800 ms)
   PipelineStep: Sharing RunPure                                                              iterations: Killed
  2. When running stack exec grin -- --optimize --no-prelude --print-errors --continue-on-failed-lint dump_2.grin for this I get the same, but the process gets killed at the hpt stage:
PipelineStep: T SparseCaseOptimisation                                                      had effect: None (0.696800 ms)
  PipelineStep: T CaseHoisting                                                                had effect: None (0.945100 ms)
  PipelineStep: T GeneralizedUnboxing                                                         had effect: None (0.008400 ms)
  PipelineStep: T ArityRaising                                                                had effect: None (0.040300 ms)
  PipelineStep: T InlineApply                                                                 had effect: None (1.803600 ms)
  PipelineStep: T LateInlining                                                                Invalidating type environment
  had effect: ExpChanged (12.713100 ms)
  PipelineStep: SaveGrin (Rel "LateInlining.grin")                                            (13.689900 ms)
  PipelineStep: T BindNormalisation                                                           had effect: ExpChanged (0.001200 ms)
  PipelineStep: SaveGrin (Rel "BindNormalisation.grin")                                       (23.816600 ms)
  PipelineStep: T InlineEval                                                                  
  Analysis
   PipelineStep: HPT Compile                                                                  (0.000400 ms)
   PipelineStep: HPT RunPure                                                                  iterations: 144 Killed

and sometimes I also have some type env errors after HPT.

So the question is: is it supposed to be like this, and is there a set of transformations/optimizations/workarounds that is guaranteed to proceed successfully and output simple enough GRIN to compile into hardware?

P.S. both examples work fine in --eval

Fix data dependencies in pipeline

With the introduction of the extended syntax, there were some modifications to how certain transformations receive what kind of analysis results. This should be reviewed and cleaned up.

Modular PrimOps

The GRIN frontends should not depend on the built-in evaluation of the primitive operations. Currently we are able to define the ffi and primitive operations, but their interpretation is still tied to the Reducer.Eval and Reducer.LLVM.CodeGen modules.

The use case: I, as a frontend developer, want to create my own primitive operations which best suit the compiler I work on. To do that, I have to define the set of primitives. Those primitives are configured in the ffi/pure section of my GRIN prelude.

Stages:

  1. I would like to add their implementation to the Pure Evaluator to get the semantics right.
  2. Implement my primitive operations in prim_ops.h / prim_ops.c, which are linked during the executable generation phase.
  3. Extend the LLVM codegen with my implementation for the primitives in a modular/pluggable way.

Gitter channel?

It would be very cool if there was a gitter channel to discuss things in general about GRIN. I would love to have discussions about the general direction of the project, etc. Please consider creating a channel!

Changing the order of `Transformations` in `defaultOptimizations` improves compilation

defaultOptimizations looks like this:

defaultOptimizations :: [Transformation]
defaultOptimizations =
  [ ...
  , InlineEval
  , InlineApply
  , LateInlining
  ]

but if I make sure that InlineEval is run first:

defaultOptimizations :: [Transformation]
defaultOptimizations =
  [ InlineEval
  , ...
  , InlineApply
  , LateInlining
  ]

then my generated code improves. Urban Boquist's thesis says that inlining eval and apply are the first simplifications that need to be applied. Maybe this explains the behaviour?


Here is the relevant GRIN code:

eval p.0 =
  v.0 <- fetch p.0
  case v.0 of
    #default ->
      pure v.0
    (Fv1 a.0) ->
      res.0 <- v1 $ a.0
      update p.0 res.0
      pure res.0
    (Fv2 a.1) ->
      res.1 <- v2 $ a.1
      update p.0 res.1
      pure res.1

apply p.1 x.0 =
  case p.1 of
    (P1v1) ->
      v1 $ x.0
    (P1v2) ->
      v2 $ x.0

v1 v3 =
  v4 <- eval $ v3
  pure v4

v2 v5 =
  v6 <- eval $ v5
  pure v6

grinMain =
  v7 <- store (CInt 888)
  v8 <- store (Fv2 v7)
  v9 <- store (Fv1 v8)
  (CInt v10) <- eval $ v9
  v11 <- _prim_int_print $ v10
  pure v11

"Inline Eval Late":

eval.unboxed p.0 =
  v.0 <- fetch p.0
  v.1 <- pure v.0
  case v.1 of
    #default ->
      (CInt unboxed.CInt.0) <- pure v.1
      pure unboxed.CInt.0
    (Fv1 a.0) ->
      eval.unboxed $ a.0
    (Fv2 a.1) ->
      eval.unboxed $ a.1

grinMain =
  v.5 <- pure (CInt 888)
  v.2 <- pure v.5
  v7 <- store v.2
  v.6 <- pure (Fv2 v7)
  v.3 <- pure v.6
  v8 <- store v.3
  v.7 <- pure (Fv1 v8)
  v.4 <- pure v.7
  v9 <- store v.4
  unboxed.CInt.1 <- eval.unboxed $ v9
  _prim_int_print $ unboxed.CInt.1

"Eval inline early":

grinMain =
  a.1.0.58.0.arity.1.0 <- pure 888
  _prim_int_print $ a.1.0.58.0.arity.1.0

Idris backend TODOs

  • Support String literal in parser
  • Handle different Int types
  • Better FFI
    ...

How to cite paper

How do I cite the paper 'A modern look at GRIN, an optimizing functional language back end'? I can't find its source.

cleanup codebase

folder: AbstractInterpretation

  • delete AbstractRunGrin.hs
  • delete HPTResult.hs
  • rename HPTResultNew.hs to HPTResult.hs
  • rename Pretty.hs to PrettyIR.hs

move base modules under Grin namespace:

  • Grin.hs
  • ParseGrin.hs
  • Pretty.hs
  • TypeCheck.hs
  • TypeEnv.hs

move test related modules under Test/Check/Lint namespace:

  • Assertions.hs
  • Check.hs
  • Grammar.hs
  • Gspec.hs
  • PrimOps.hs
  • Test.hs
  • VarGen.hs

Transformations

  • delete AssignStoreIDs.hs
  • delete Playground.hs
  • delete Rename
  • delete Substitution

Transformations/Simplifying ; postpone

  • delete RightHoistFetch.hs

move pipeline related modules under Pipeline namespace:

  • Eval.hs
  • Optimizations.hs
  • Pipeline.hs

REFACTOR / CLEANUP

Grin.hs:

  • keep only the AST definition and move everything else to a different module ; this must be very simple
  • delete Loc and Undefined from Val ; reducers should use their own value types

split projects

  • create separate cabal files for frontends in a separate folder (but in the same repo and stack env)
  • move Frontend.Lambda to Grin.Lambda

Extending the Grin syntax with primops

I've noticed that primitive ops are referenced using variables with special names - (_prim_int_print) and the like.

I think it would be better to have these primitives as a sum type, and extend Val with a Prim constructor.

data Val = Var Name | ... | Prim Prim
data Prim
  = Prim_print
  | Prim_add
  | ...

What do you think?

I'm also thinking about bringing other LLVM features in, like arrays or structs, which leads me to wonder: does this project plan to only support LLVM? Or do you intend to be able to target other backends? I ask because I think the answer might influence the design for primitives.

Also this isn't a feature request - I'm happy to implement my ideas if they're acceptable.

Support for external definitions

Add syntax and data-flow solver support for user defined external functions. e.g.

  • primitive operations
  • foreign functions

Proposed syntax:

  primop effectful
    _prim_int_print     :: Int -> Unit
    _prim_read_string   :: String
    _prim_string_print  :: String -> Unit

    newArrayArray# :: {Int#} -> {State# s} -> {GHC.Prim.Unit# {MutableArrayArray# s}}

  primop pure
    _prim_string_concat   :: String -> String -> String
    _prim_string_reverse  :: String -> String
    _prim_string_eq       :: String -> String -> Bool
    _prim_string_head     :: String -> Int
    _prim_string_tail     :: String -> String
    _prim_string_cons     :: Int -> String -> String
    _prim_string_len      :: String -> Int
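One possible Haskell representation of such external declarations; this is purely illustrative, and grin's actual AST types for externals may look quite different:

```haskell
-- Illustrative types only; not grin's real AST.
data SimpleType = TInt | TString | TBool | TUnit
  deriving (Eq, Show)

data External = External
  { eName      :: String
  , eArgTypes  :: [SimpleType]   -- argument types, left to right
  , eRetType   :: SimpleType
  , eEffectful :: Bool           -- primop effectful vs primop pure
  } deriving (Eq, Show)

-- The first declaration from the proposed syntax above:
primIntPrint :: External
primIntPrint = External "_prim_int_print" [TInt] TUnit True

-- And a pure one:
primStringConcat :: External
primStringConcat = External "_prim_string_concat" [TString, TString] TString False

main :: IO ()
main = print (eEffectful primIntPrint)  -- True
```

A data-flow solver could then treat an applied name as opaque whenever it resolves to an External, instead of trying to analyze a body that doesn't exist.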

Binary download?

Is it possible to have a binary executable of grin that runs on a Haskell-independent machine? The binary would parse some IR (generated from functional languages) and do the compilation.
In this way we may easily have more front-ends ported to grin.

Clang dependency for tests

I think we should mention the clang dependency for the tests (test-data, idris-grin).

I have a passing stack build and a failing stack test on Fedora 31 with LLVM 7 installed but not Clang. I would have put in a pull request, but I don't have a Mac or Debian-based computer set up.

Garbage Collection

It would be great if GRIN came with built-in garbage collection.

Improve case hoisting

The current implementation of CaseHoisting can be improved by utilizing the new syntax. With the introduction of the extended syntax, we can perhaps store the type of the return value of a given case alternative. As of now, HPT tracks the alt name as a restricted version of the scrutinee, but there might be ways around this.

Stop pipeline on errors and add --continue-on-lint option for the current behavior.

In Issue #75 the root cause was a non-compliant GRIN program. Although the evaluator was able to run such a program, it broke some assumptions. Those assumptions should have been caught by the linter. The linter couldn't fire because there was no active type-env for the program, which was caused by the HPT result being empty due to the non-compliant program. We should stop the pipeline by default when the HPT result cannot be computed, and on linter errors; this restriction should be lifted when a --non-safe option is given to the compiler.

Syntactical extensions for GRIN

Syntactical extensions for GRIN

This document proposes new syntactic constructs for the GRIN IR, as well as modifications to some of the already existing ones.

Motivation

The current syntax of GRIN can pose some difficulties for analyses. As an example, the created-by analysis requires each binding to have a variable pattern; that is, each binding must have a name. Also, analyzing programs where every intermediate value has an explicit name is much easier.

Furthermore, the current Haskell definition of the syntax allows certain erroneous programs. Currently, we rely on the linter to reject these programs, but a more rigid definition of the syntax could prevent errors statically.

Datalog based analysis

Currently, the analyses are implemented via compiled abstract interpretation, where the abstract program is generated from Haskell and can be run in both Haskell and C++. However, in the future, we plan to transition to Soufflé Datalog based analyses. In order to build a Datalog model for the GRIN AST, every single instruction must have a name. It is not a question of convenience; the model requires it. This is yet another reason to give an explicit name to everything in the AST.

Note: Soufflé has many advantages over our current implementation, but this is out of the scope of this proposal.

Naming

These syntactic constraints mainly deal with the naming of intermediate values. They make sure that most of the currently available values can always be referred to by an explicit name. In other words, constants could only be introduced through the pure function. Everywhere else, we would be passing variables around. All of these constraints can be implemented trivially: we only have to change the below constructions so that they require a Name instead of a value.

  • named case scrutinees
  • named node fields
  • named function arguments
  • named store argument
  • named case alternatives (needed for the Soufflé Datalog AST model)

An observant reader might have noticed that the pattern matching construct for bindings isn't addressed above. That is because we will deal with it separately. Also, note that fetch and update already only accept variable names as arguments.

New

These syntactic constructs are completely new to the GRIN language.

@patterns

As mentioned earlier, each binding should have a name, the variable it binds. Currently, the syntax allows for pattern bindings, which do not bind a variable to the left-hand side computation. This problem could be fixed by introducing @patterns.

(CInt k)@v <- pure n
<both v and k can be referenced here>

@patterns combine the flexibility of value patterns with the rigidness of the naming convention. By removing value patterns from the language and introducing @patterns, we can achieve the same expressive power as before, while making sure that everything can be referenced by an explicit name.

Note that unlike the @pattern in Haskell, here the variable name and the pattern are swapped. This is to keep the syntax consistent with named case alternatives.

Furthermore, we should restrict the available patterns to nodes only. Currently, literals and even variable names are allowed. This is too general.
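A sketch of what the binding patterns could look like in the Haskell AST once value patterns are replaced by @patterns restricted to nodes (constructor and field names here are invented for illustration, not the proposal's final design):

```haskell
type Tag  = String
type Name = String

-- After the change, a binding's left-hand side is either a plain
-- variable or a node @pattern; bare literals/variables as patterns go away.
data BPat
  = VarPat Name            -- v <- lhs
  | AsPat Tag [Name] Name  -- (CInt k)@v <- lhs  (tag, field names, bound name)
  deriving (Eq, Show)

-- Every binding now has an explicit name for the whole value,
-- plus names for the node fields when an @pattern is used.
boundNames :: BPat -> [Name]
boundNames (VarPat v)         = [v]
boundNames (AsPat _ fields v) = v : fields

main :: IO ()
main = print (boundNames (AsPat "CInt" ["k"] "v"))  -- ["v","k"]
```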

Function application (scrapped)

Currently, normal functions and externals share the same syntactic node for application. Externals are stored together with GRIN programs, and they are differentiated from normal functions by looking up the applied function's name in the stored external list. This makes analyzing function applications quite inconvenient.

We could introduce different syntactic nodes for normal functions and externals, but that would pose an unnecessary overhead on analyses and transformations in certain use cases. Instead, we will introduce different types of names for functions and externals. The application node will differentiate functions from externals by wrapping their names in different data constructors.

data AppName
  = Fun { appName :: Name }
  | Ext { appName :: Name }

data Exp = ... | SApp AppName [Name] | ...

Named case alternatives

To model the GRIN AST in Datalog, each syntactic construct must have an identifier. Currently, the alternatives in case expression don't have unique identifiers. We can solve this issue by naming each alternative.

case v of
  (CNil)       @ v1 -> <code>
  (CCons x xs) @ v2 -> <code>

The syntax would be similar to that of the @patterns, but the variable name and the pattern would be swapped. This is to improve the readability of the code: with long variable names, the patterns can get lost. Also, there could be an arbitrary number of spaces between the @ symbol and the pattern/variable. This is also to improve readability through proper indentation. Readability is only important for unit tests, not for generated GRIN code.

Semantics

The names of the case alternatives would be the scrutinee restricted to the given alternative. For example, in the above example, we would know that v1 must be a node constructed with the CNil tag. Similarly, v2 is also a node, but is constructed with the CCons tag.

Named alternatives would make dataflow more explicit for case expressions. Currently, the analyses restrict the scrutinee when they are interpreting a given alternative (they are path sensitive), but with these changes, that kind of dataflow would be made visible.

Structural

These modifications impose certain structural constraints on GRIN programs.

Last pure

Binding sequences should always end in a pure. This will make the control flow a little bit more explicit. However, this change could be impacted by the introduction of basic blocks. It might be wiser to delay this change.

Program, function, definition

The above-mentioned notions should be different syntactic constructs. They should form a hierarchy, so it is always clear whether a transformation/analysis works on an entire GRIN program, a single function, or only on an expression.

Currently available, but has to be more precise

These modifications would turn currently linter-checked properties into static constraints. For example, only real expressions could appear on the left-hand side of a binding, or the alternatives of a case expression could really only be alternatives.

No longer needed

By introducing the above-mentioned syntactic modifications, some currently available constructs become redundant: LPat, SimpleVal.

Also, some of the low-level GRIN constructs will be removed from the AST: VarTag node and indexed Fetch. These constructs were originally needed for RISC code generation. Since then, GRIN transitioned to LLVM, and over the years these constructs proved to be unnecessary.

Questions

Parsing function applications (scrapped)

The current syntax does not differentiate normal function application from external function application, but the new syntax will. This means we must decide whether a name corresponds to a normal function or to an external while parsing. Two solutions that might work are the following.

Use previously parsed information (scrapped)

We could add some sort of state to the parsers that keeps track of the available externals. Since externals can be added through the PrimOpsPrelude, we would also need to pass those as an extra argument to the parser.

Naming convention for externals (scrapped)

We could introduce a new naming convention for externals. They would always begin with an underscore.

Implicit parsing of unit patterns

Currently, the parser can implicitly parse bindings that have a unit pattern. The most common example of this is update: the string "update p v" is parsed as if it were "() <- update p v". The new syntax does not allow unit patterns, so we must think of an alternative way to express this. Maybe a wildcard pattern?

_ <- <lhs>
<rhs>

Also, wildcard patterns could be parsed implicitly as well:

<lhs>
<rhs>

Answer

Every instruction must have a name, even function calls that only return unit. Binding their result to a variable is necessary for unique identification; hence, the ideas of unit and wildcard patterns are scrapped.

Future

Basic blocks

As for the future, we plan to introduce basic blocks into the language. This will bring its own syntactic modifications, but they will be mostly independent of the above-discussed changes.

GADT-style AST

Also, GRIN will have a GADT-style AST as well. This is for front-end users of GRIN who want to generate well-structured GRIN programs. It will make all syntactic restrictions explicit, hence forcing the user to build a syntactically sound AST.

Representing the AST with GADTs has a serious drawback though: recursion schemes don't support GADTs at the moment. This means that neither the currently implemented analyses nor the transformations will work on the GADT-style AST. This is why we will only use it for GRIN code generation. The front end will generate a GADT-style GRIN AST, which we will then transform into a plain old ADT so that we can continue working with it.

Prototype AST

The below examples only include the changes regarding the naming convention.

New constructs

data BPat
  = VarPat { bPatVar :: Name }
  | AsPat  { bPatVar :: Name
           , bPatVal :: Val  -- TODO: This will be restricted in the future.
           }
  | WildCard

data AppName
  = Fun { appName :: Name }
  | Ext { appName :: Name }

Changes

data Val
  -- CHANGE: Name
  = ConstTagNode  Tag  [Name]
  -- CHANGE: Name
  | VarTagNode    Name [Name]
  | ValTag        Tag
  | Unit
  -- simple val
  | Lit Lit
  | Var Name
  | Undefined     Type

data Exp
  = Program     [External] [Def]
  | Def         Name [Name] Exp
  -- Exp
  -- CHANGE: BPat
  | EBind       SimpleExp BPat Exp
  -- CHANGE: Name
  | ECase       Name [Alt]
  -- Simple Exp
  -- CHANGE: Name
  | SApp        AppName [Name]
  | SReturn     Val
  -- CHANGE: Name
  | SStore      Name
  | SFetchI     Name (Maybe Int)
  -- CHANGE: Name
  | SUpdate     Name Name
  | SBlock      Exp
  -- Alt
  | Alt CPat Name Exp
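To see how the changed constructors fit together, here is a small self-contained mock-up. It uses a trimmed-down subset of the types above (Name is just a String, and only a few Exp constructors are kept); it encodes the binding u.thunk <- store u.box followed by a call, and collects the bound names:

```haskell
type Name = String

data BPat    = VarPat Name | WildCard     deriving Show
data AppName = Fun Name    | Ext Name     deriving Show

-- A small subset of the proposed Exp type, enough for the example.
data Exp
  = EBind Exp BPat Exp
  | SApp AppName [Name]
  | SStore Name
  | SReturn Name
  deriving Show

-- grinMain fragment:  u.thunk <- store u.box ; foo $ u.thunk
example :: Exp
example =
  EBind (SStore "u.box") (VarPat "u.thunk")
    (SApp (Fun "foo") ["u.thunk"])

-- Collect the names bound anywhere in an expression.
boundNames :: Exp -> [Name]
boundNames (EBind lhs pat rest) =
  patNames pat ++ boundNames lhs ++ boundNames rest
  where
    patNames (VarPat n) = [n]
    patNames WildCard   = []
boundNames _ = []

main :: IO ()
main = print (boundNames example)  -- ["u.thunk"]
```

Note how BPat sits between the bound expression and its continuation, making the binder explicit in the AST.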

SimpleDeadParameterElimination causes "undefined variable" errors

GRIN Version: 310fdc3a184353049213d28c84980345d1ff66fd (current master)

Save the following GRIN program in src.grin and then optimise it with grin src.grin --optimize.

grinMain =
  u.box <- pure (CUnit)
  u.thunk <- store u.box
  w.box <- pure (Ffoo u.thunk u.thunk)
  w.thunk <- store w.box
  foo $ w.thunk u.thunk

foo a.thunk b.thunk =
  (Ffoo c.thunk d.thunk) <- fetch a.thunk
  (CUnit) <- fetch d.thunk
  out.prim_int <- pure 0
  _prim_int_print $ out.prim_int

Optimisation will fail with output:

 PipelineStep: Optimize                                                                       PHASE #1
  PipelineStep: T BindNormalisation                                                           had effect: None (0.001351 ms)
  PipelineStep: T SimpleDeadFunctionElimination                                               had effect: ExpChanged (0.001667 ms)
  PipelineStep: SaveGrin (Rel "SimpleDeadFunctionElimination.grin")                           (0.311913 ms)
  PipelineStep: T SimpleDeadParameterElimination                                              had effect: ExpChanged (0.001111 ms)
  PipelineStep: SaveGrin (Rel "SimpleDeadParameterElimination.grin")                          (0.223380 ms)
 error after SimpleDeadParameterElimination:
undefined variable: d.thunk

illegal code

(Note: SimpleDeadFunctionElimination reports had effect: ExpChanged yet its output is identical to the original program.)

The SimpleDeadParameterElimination step eliminates b.thunk and d.thunk, but (CUnit) <- fetch d.thunk is left intact.
Below is .output/002.SimpleDeadParameterElimination

grinMain =
  u.box <- pure (CUnit)
  u.thunk <- store u.box
  w.box <- pure (Ffoo u.thunk)
  w.thunk <- store w.box
  foo $ w.thunk

foo a.thunk =
  (Ffoo c.thunk) <- fetch a.thunk
  (CUnit) <- fetch d.thunk
  out.prim_int <- pure 0
  _prim_int_print $ out.prim_int

Linting src.grin with --lint shows that the program is correct, so this appears to be due either to a bug in the linter or to a bug in SimpleDeadParameterElimination.

This problem initially arose when I was trying to optimise a generated program (which is also correct according to the linter).
That generated program is 282 lines long, so I opted to hand-write a minimal example instead.
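For what it's worth, this class of bug could be caught by re-running a scoping check after each transformation. The sketch below is hypothetical (it is not the actual GRIN linter): it models a function body as a flat list of steps that bind and use names, and reports names used before they are bound, as d.thunk is above:

```haskell
import qualified Data.Set as Set

type Name = String

-- Simplified instruction stream: each step binds some names and uses some.
data Step = Step { binds :: [Name], uses :: [Name] }

-- Return the names used before they are bound, given the function parameters.
undefinedVars :: [Name] -> [Step] -> [Name]
undefinedVars params = go (Set.fromList params)
  where
    go _ [] = []
    go env (Step bs us : rest) =
      [u | u <- us, not (Set.member u env)]
        ++ go (foldr Set.insert env bs) rest

main :: IO ()
main = print $ undefinedVars ["a.thunk"]
  [ Step ["c.thunk"]      ["a.thunk"]  -- (Ffoo c.thunk) <- fetch a.thunk
  , Step []               ["d.thunk"]  -- (CUnit) <- fetch d.thunk
  , Step ["out.prim_int"] []           -- out.prim_int <- pure 0
  , Step []               ["out.prim_int"]
  ]
```

Running this on the post-transformation body of foo flags d.thunk as undefined, matching the error message.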

Counting Immutable Beans

Implementation of the Counting Immutable Beans (CIB) for the GRIN compiler.

Summary

CIB uses instrumentation of the original program. There are four new instructions in the syntax
that are inserted via instrumentation. These fall into two categories:

  • reference counter instructions:
    • inc
    • dec
  • heap location reuse:
    • reset
    • reuse

In the CIB approach every heap location has a reference counter associated with it. Inc increments the counter for the location, and also transitively increments the counters of all referred locations.
Dec decrements the counter for the location, and also transitively decrements the counters of all referred locations.

Reset, reuse:

From the CIB paper:

let y = reset x.

If x is a shared value, then y is set to a special pointer value BOX; otherwise y is set to the heap location associated with x.
If x is not shared, then reset also decrements the reference counters of the components of x.

let z = reuse y in ctor_i w.

If y is BOX, reuse allocates a new heap location for the constructor.
If y is not BOX, the runtime reuses the heap location for storing the constructor.

Application of the same idea for GRIN:

Differences: while Lean's IR puts every variable on the heap, GRIN uses variables as if they were registers, and only a subset of the registers are associated with heap locations. A register is associated with a heap location if its type is Loc. This means the GRIN implementation of the CIB approach needs a type environment that tells which variables can be affected by the CIB operations.

In GRIN:

  • The CIB instrumentation should happen after the optimization steps.
  • A special interpreter should be implemented which handles the CIB instructions.
  • It should probably have its own LLVM code generator and an LLVM-implemented runtime, preferably as a plugin for the existing one.

We need to add 4 new instructions:

  • x <- reset y; where y is a heap location and x is a special heap location, which can also be BOX.
  • z <- reuse x y; where x is a special heap location created by reset, and y is a Node value.
  • z <- inc x; where x is a heap location; inc transitively increments the reference counters of the reachable locations. Cycle detection must be performed. The increment operation returns unit.
  • z <- dec x; where x is a heap location; dec transitively decrements the reference counters of the reachable locations. Cycle detection must be performed. When a counter reaches zero, the runtime must deallocate the location. The decrement operation returns unit.
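The transitive inc/dec semantics with cycle detection could be sketched as follows. This is an illustrative model only (the Heap representation and all names are assumptions, not the GRIN runtime): each location carries a counter plus its outgoing references, and the traversal visits every location at most once, so cycles terminate:

```haskell
import qualified Data.Map.Strict as Map
import           Data.Map.Strict (Map)
import qualified Data.Set as Set

type Loc  = Int
-- Each heap location: (reference count, locations it refers to).
type Heap = Map Loc (Int, [Loc])

-- Transitively adjust counters, visiting each location once (cycle-safe).
adjust :: Int -> Loc -> Heap -> Heap
adjust delta start = go Set.empty [start]
  where
    go _ [] h = h
    go seen (l:ls) h
      | l `Set.member` seen = go seen ls h
      | otherwise = case Map.lookup l h of
          Nothing -> go (Set.insert l seen) ls h
          Just (rc, refs) ->
            let rc' = rc + delta
                -- a real runtime would deallocate here when rc' == 0
                h'  = Map.insert l (rc', refs) h
            in go (Set.insert l seen) (refs ++ ls) h'

incLoc, decLoc :: Loc -> Heap -> Heap
incLoc = adjust 1
decLoc = adjust (-1)

main :: IO ()
main = do
  -- location 0 refers to 1 and 2; 2 refers back to 0 (a cycle)
  let h = Map.fromList [(0, (1, [1,2])), (1, (1, [])), (2, (1, [0]))]
  print (Map.map fst (incLoc 0 h))  -- counters of 0, 1 and 2 all become 2
  print (Map.map fst (decLoc 0 h))  -- counters of 0, 1 and 2 all become 0
```

A real implementation would also trigger deallocation when a counter hits zero, as noted in the comment.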

The GRIN nodes store primitive values, but the runtime distinguishes location values from primitive values, so it is able to compute the transitive closure of the reachability relation of a location and manipulate its reference counters.

Each of the four instructions needs to be implemented in the GRIN runtime/interpreter.

In the original paper, reuse of the constructors could happen only if the arities of the constructors
are the same. But in GRIN, the runtime needs to allocate heap locations based on the type of the heap location. This means every heap location can have its own arity, and reuse of the heap location is possible only if the new node does not exceed the arity of the heap node. Otherwise a new node needs to be allocated, with the maximum arity.

The most important change is the reuse construction. It changes certain instances of the
store operation to the reuse operation.

Before:
x <- store y;

After:
z <- reset w;
...
x <- reuse z y;

In this case we need to decide to reuse the heap location associated with w only if w can accommodate all the possible values of x, that is, max-arity(w) >= max-arity(x). While Lean's approach uses the arity of the constructors in the alternatives, we can use the abstract information of all the possible runs.
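The reuse decision itself reduces to an arity comparison. A minimal sketch, assuming the max arities come from a prior abstract interpretation (the maxArity environment and names here are hypothetical):

```haskell
import qualified Data.Map.Strict as Map
import           Data.Map.Strict (Map)

type Name = String

-- Hypothetical result of an abstract interpretation: the maximum node
-- arity each heap variable may hold over all possible runs.
maxArity :: Map Name Int -> Name -> Int
maxArity env v = Map.findWithDefault 0 v env

-- The location bound to w may be reused for x only if it is big enough.
canReuse :: Map Name Int -> Name -> Name -> Bool
canReuse env w x = maxArity env w >= maxArity env x

main :: IO ()
main = do
  let env = Map.fromList [("w", 2), ("x", 2), ("y", 3)]
  print (canReuse env "w" "x")  -- True
  print (canReuse env "w" "y")  -- False: y may need a bigger node
```

When canReuse fails, the transformation would keep the plain store and allocate a fresh location instead.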

Implementation steps:

  1. Import the abstracting definitional interpreters from the mini-grin repo
  2. Change the implementation to use base functors instead of Expr datatype
  3. Implement reference statistics with the new interpreter, as a warm-up exercise
  4. Implement CIB program instrumentation for GRIN producing ExprF :+: CibF AST
  5. Implement interpreter for CIB extended GRIN program
  6. Extra: Implement LLVM codegen plugin for CIB instructions
