Thursday, February 5, 2009

Grin a little.

It has come to my attention that we are not using GRIN to its fullest. More specifically, the 'eval' and 'update' operations are currently handled by the RTS rather than expressed in GRIN itself. This has unfortunate consequences for both the optimizer and the backend code.
Without an explicit control-flow graph (given by inlining eval/apply), many of our more important transformations cannot be performed. Even worse than the lost optimization opportunities is the increased complexity of the RTS. Dealing with issues of correctness is an annoying distraction from the more enjoyable endeavour of applying optimizations.
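To make the point concrete, here is a rough sketch of what inlined eval looks like. The node tags and function names ('Fupto', 'upto') are invented for illustration and are not LHC's actual code: the idea is that the opaque RTS call becomes an explicit case dispatch over every tag that analysis says can reach this point, which is exactly the control-flow graph the optimizer needs.

```haskell
import Data.IORef

-- Heap nodes for a tiny program whose only thunk is 'upto n m',
-- a lazy producer of the list [n..m].
data Node
  = CInt Int                         -- evaluated: machine integer
  | CNil                             -- evaluated: empty list
  | CCons (IORef Node) (IORef Node)  -- evaluated: list cell
  | Fupto (IORef Node) (IORef Node)  -- thunk: suspended 'upto n m'
  | Ind (IORef Node)                 -- indirection left behind by an update

-- Inlined eval: every possible tag is dispatched explicitly, so the
-- optimizer sees the complete control flow instead of an RTS black box.
eval :: IORef Node -> IO Node
eval ref = do
  node <- readIORef ref
  case node of
    Fupto n m -> do                 -- unevaluated thunk: run it, then update
      result <- upto n m
      target <- newIORef result
      writeIORef ref (Ind target)
      return result
    Ind target -> eval target       -- follow indirections
    whnf       -> return whnf       -- CInt/CNil/CCons are already evaluated

upto :: IORef Node -> IORef Node -> IO Node
upto nRef mRef = do
  CInt n <- eval nRef
  CInt m <- eval mRef
  if n > m
    then return CNil
    else do
      hd <- newIORef (CInt n)
      n' <- newIORef (CInt (n + 1))
      tl <- newIORef (Fupto n' mRef)
      return (CCons hd tl)
```

With the dispatch visible, transformations such as case specialization and fetch movement can work on eval like on any other GRIN code.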

Moving away from the magical implementation of 'update' means we have to start thinking about our memory model. The GRIN paper suggests using a fixed node size, with a tail pointer for additional space when necessary. With this scheme we can update common-case nodes without allocating more heap space. However, since we're likely to run into complications with concurrency and certain forms of garbage collection, I think a simpler approach is more apt.
Replacing updated nodes with indirections is very easy to implement, doesn't clash with any optimizations (the original GRIN approach interferes with fetch movement), and opens the door to advanced features such as concurrency.
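A minimal sketch of update-by-indirection, with invented names rather than LHC's real ones: after a thunk is forced, its cell is overwritten with a one-word pointer to the result. The result can live anywhere on the heap, so no fixed node size is needed, and the old cell is written exactly once after evaluation.

```haskell
import Data.IORef

data Cell a
  = Thunk (IO a)          -- suspended computation
  | Ind (IORef (Cell a))  -- indirection to the shared result
  | Value a               -- evaluated result

force :: IORef (Cell a) -> IO a
force ref = do
  cell <- readIORef ref
  case cell of
    Value v    -> return v
    Ind target -> force target        -- chase the indirection
    Thunk io   -> do
      v      <- io
      target <- newIORef (Value v)
      writeIORef ref (Ind target)     -- update: thunk becomes indirection
      return v
```

Forcing the same cell twice runs the computation only once; every later force just follows the indirection to the shared result.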

So this is what I'll be working on in the near future. All magic has to be purged from the kingdom so logic and reason can reign supreme.


  1. Does LHC's GRIN scale? I mean, compilation is separated into two phases: modules are compiled one by one first (or several together), then linked. Otherwise the compiler consumes impractical resources for real-world projects. Since you want to inline more apply/eval, the situation seems even more crucial.

  2. By `compilation' and `modules' I mean low-level ones (not some sort of intermediate code). GRIN in the paper assumes the whole program is under its grasp, so I'm curious how you balance the scaling issue against the performance penalty.

  3. jge:

    We haven't yet had to deal with scalability. MLton seems to suggest that, with the proper optimizations, control-flow analysis can be done fairly quickly.

  4. jge:

    I'm not sure what you mean. We don't convert to Grin code until after linking the modules together, anyway.

  5. Samule:
    For example, gcc compiles each C file to an object file first, where functions implemented in other C files are "pending"; the linker then rewrites pointers to "pending" functions with their implementations from the other object files. A source file generally can't be too big, so each step consumes manageable memory.

    GRIN has no unknown functions and makes all indirections explicit, which means a compiled function is also "referentially transparent" (you can't pass in a pointer that is absent from its case..of..), so you can't have "pending" function pointers. If two modules invoke each other, you have to keep both of them in memory, and likewise for 3, 4, or more modules. Basically, you have to keep every module of a connected component of the module graph in memory. Think about compiling GHC this way.

    Several methods for getting scalability come to mind:
    1. Allow unknown functions at runtime, at the cost of an additional indirection; a tradeoff.
    2. Develop an algorithm that dynamically writes disconnected modules (whose case..of.. expressions are fixed) back to disk to save memory during compilation.
    3. Keep all case..of.. expressions open: the compiler deals with modules one by one and produces a .obj and .req files for each of them. A module has only one .obj, in which all case..of.. expressions are open, and it may have a .req for each other module, storing the case..of.. alternatives to be merged into that module. The compiler keeps merging .obj and .req files into new .obj files until no .req is left. A lot of algorithms and data structures are needed here; for instance, dependency analysis of modules can speed up convergence.

    GRIN is great, so these are my random, rough thoughts on making it scalable; you possibly already have some of them in your code.

    By the way, I'm not a native English speaker, so some of the sentences I typed probably read oddly to you.

  6. jge:
    We have no plans for improving scalability. Incremental development can easily be done with other compilers.
