Monday, December 29, 2014

Nursery sizes.

Intel i5-3210M CPU, 3072 KB L3 cache. Not sure why the CPU stalls with the tiny nurseries.

Friday, December 12, 2014

Test suite for Haskell2010

To keep track of progress and to ward off regressions, the test suite now has a section for Haskell2010 compatibility checks:

# runhaskell Main.hs -t Haskell2010 --plain | tail -n 4
         Test Cases  Total
 Passed  0           0
 Failed  6           6
 Total   6           6

The tests only cover a small part of the Haskell2010 specification and none of them pass yet.

Thursday, December 4, 2014

Compiling to JavaScript.

Lots of very interesting things are possible when everything (including the runtime system) is translated to LLVM IR. For example, compiling to JavaScript becomes trivial. Consider this ugly version of Hello World:

{-# LANGUAGE MagicHash #-}
module Main (main) where

import LHC.Prim

putStrLn :: List Char -> IO Unit
putStrLn msg = putStr msg `thenIO` putStr (unpackString# "\n"#)

main :: IO Unit
main = putStrLn (unpackString# "Hello World!"#)

entrypoint :: Unit
entrypoint = unsafePerformIO main

Notice the 'List' and 'Unit' types, and the 'thenIO' and 'unpackString#' functions. There's no syntactic sugar in LHC yet. You can get everything sugar-free these days, even Haskell compilers.

Running the code through the LLVM dynamic compiler gives us the expected output:

# lli Hello.ll
Hello World!

Neato, we have a complete Haskell application as a single LLVM file. Now we can compile it to JavaScript without having to worry about the garbage collector or the RTS; everything has been packed away in this self-contained file.

$ emcc -O2 Hello.ll -o Hello.js # Compile to JavaScript using
                                # emscripten.
$ node Hello.js                 # Run our code with NodeJS.
Hello World!

$ ls -lh Hello.js               # JavaScript isn't known to be
                                # terse but we're still smaller
                                # than HelloWorld compiled with GHC.
-rw-r--r--  1 lemmih  staff   177K Dec  4 23:33 Hello.js

Friday, November 28, 2014

The New LHC.

What is LHC?

The LLVM Haskell Compiler (LHC) is a newly reborn project to build a working Haskell2010 compiler out of reusable blocks. The umbrella organisation for these blocks is the haskell-suite. The hope is that with enough code reuse, even the daunting task of writing a Haskell compiler becomes manageable.

Has it always been like that?

No. LHC started as a fork of the JHC compiler. A bit later, LHC was reimagined as a backend to the GHC compiler.

Can LHC compile my code?

LHC can only compile very simple programs for now. Stay tuned, though.

Where's development going next?

  1. Better support for Haskell2010.
  2. Reusable libraries for name resolution and type-checking.
  3. Human-readable compiler output. With LLVM, optimisations are less important. We instead focus on generating pretty code.

Tuesday, November 25, 2014

Very minimal Hello World.

The LLVM Haskell Compiler is finally coming together: from Haskell parser to name resolution to type checker to desugarer to LLVM backend to GC. Everything is held together with duct tape, but it feels great to finally compile and run Hello World.

# cat Hello.hs
{-# LANGUAGE MagicHash #-}
module Main (main) where

import LHC.Prim

main :: IO Unit
main =
  puts "Hello Haskell!"# `thenIO`
  return Unit

entrypoint :: Unit
entrypoint = unsafePerformIO main

Compiling the above file yields a single LLVM program, containing user code and the RTS.

# lli Hello.ll
Hello Haskell!

Tuesday, October 19, 2010

Rough organizational overview.

The exact details are constantly changing but here's a rough overview of the LHC pipeline.
  1. External Core.
    We've designed our compiler to use GHC as its frontend. This means that GHC will handle the parsing and type-checking of the Haskell code in addition to some of the optimization (GHC particularly excels at high-level local optimizations). LHC benefits greatly by automatically supporting many of the Haskell extensions offered by GHC.
    Notable characteristics: Non-strict, local functions, complex let-bindings. Pretty much just Haskell code with zero syntactic sugar.
    Example snippet:

    base:Data.Either.$fShowEither :: ghc-prim:GHC.Types.Int =
    ghc-prim:GHC.Types.I# (11::ghc-prim:GHC.Prim.Int#);

  2. Simple Core.
    Since External Core isn't immediately ready to be processed into GRIN code, we first translate it to Simple Core by removing or simplifying a couple of features. The most noticeable feature of External Core is locally scoped functions, which simply do not fit the GRIN model. When translating to Simple Core, we hoist all local functions to the top-level.
    Notable characteristics: Non-strict, no local functions, simplified let-bindings.
  3. Grin Stage 1.
    Let me start by introducing GRIN: GRIN (Graph Reduction Intermediate Notation) is a first-order, strict, (somewhat) functional language.
    The purpose of this first stage of GRIN code is to encode the laziness explicitly. It turns out that you can translate a lazy language (like Simple Core) to a strict language (like GRIN) using only two primitives: eval and apply. The 'eval' primitive takes a closure, evaluates it if need be and returns the resulting object. The 'apply' primitive simply adds an argument to a closure. Haskell compilers such as GHC, JHC and UHC all use this model for implementing laziness.
    Notable characteristics: Strict, explicit laziness, opaque closures.
    Example snippet:

    base:Foreign.C.Types.@lifted_exp@ w ws =
    do x2508 <- @eval ws
    case x2508 of
    (Cbase:GHC.Int.I32# x#)
    -> do x2510 <- unit 11
    base:GHC.Show.$wshowSignedInt x2510 x# w

  4. Grin Stage 2.
    At the time of writing, each of the mentioned compilers stops at the previous stage (or at what would be their equivalent of that stage).[1] LHC follows in the footsteps of the original GRIN compiler and applies a global control-flow analysis to eliminate/inline all eval/apply primitives. In the end, a lazy/suspended function taking, say, two arguments simply becomes a data constructor with two fields.
    Notable characteristics: Strict, transparent closures.
    Example snippet:

    base:Foreign.Marshal.Utils.toBool1_caf =
    do [x2422] <- constant 0
    [x2423] <- @realWorld#
    [x2424 x2425] <- (foreign lhc_mp_from_int) x2422 x2423
    [x2426] <- constant Cinteger-gmp:GHC.Integer.Type.Integer
    unit [x2426 x2425]

  5. Grin Stage 3.
    Things are starting to get fairly low-level already at stage 2. However, stage 2 is still a bit too high-level for some optimizations to be easily implemented. Stage 3 breaks the code into smaller blocks that can easily be moved, inlined and short-circuited. The code is now sufficiently low-level that it can be pretty-printed as C.
    Notable characteristics: Functions are broken down into smaller functional units. Otherwise same as stage 2.
    Example snippet:

    base:GHC.IO.Encoding.Iconv.@lifted@_lvl60swYU38 rb3 rb4 =
    do [x21578] <- @-# rb4 rb3
    case x21578 of
    0 -> constant Cghc-prim:GHC.Bool.False
    () -> constant Cghc-prim:GHC.Bool.True

  6. Grin--.
    Grin-- is the latest addition to the heap and not much is known about it for certain. It is even up for debate whether it belongs to the GRIN family at all, since it diverges from the SSA style.
    The purpose of Grin-- is to provide a vessel for expressing stack operations.
    Notable characteristics: Operates on global virtual registers, enables explicit stack management.
    Example snippet:

    do x21578 := -# rb4 rb3
    case x21578 of
    0 -> do x88175 := Cghc-prim:GHC.Bool.False
    () -> do x88175 := Cghc-prim:GHC.Bool.True
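To make the eval/apply story from stages 1 and 2 concrete, here is a sketch in ordinary Haskell using a deliberately tiny universe of closures. All names ('Thunk', 'SuspPlus', and so on) are made up for illustration; they are not LHC's actual representation.

```haskell
-- Sketch only: a defunctionalized closure universe with explicit
-- eval/apply, in the spirit of GRIN stages 1 and 2.

-- After stage 2, a suspended function taking two arguments is just a
-- data constructor with two fields.
data Thunk
  = SuspPlus Int Int  -- suspended ((+) a b), fully applied
  | PapPlus Int       -- partial application of (+) to one argument
  | Value Int         -- an already-evaluated value

-- 'eval' takes a closure, evaluates it if need be and returns the result.
eval :: Thunk -> Int
eval (SuspPlus a b) = a + b
eval (PapPlus _)    = error "eval: unsaturated closure"
eval (Value n)      = n

-- 'apply' adds one more argument to a closure.
apply :: Thunk -> Int -> Thunk
apply (PapPlus a) b = SuspPlus a b
apply t           _ = t  -- saturated closures are left untouched in this sketch
```

Once a global control-flow analysis has pinned down which constructors can reach each call site, every call to 'eval' can be inlined into a case expression like the one above, which is what the stage-2 transformation amounts to.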

Feel free to ask if you have any questions on the how and why of LHC.

[1] UHC does have the mechanics for lowering the eval/apply primitives but it is not enabled by default.

Saturday, October 16, 2010

Accurate garbage collection.

So, let's talk about garbage collection. Garbage collection is a very interesting topic because it is exceedingly simple in theory but very difficult in practice.

To support garbage collection, the key thing a language implementor has to do is to provide a way for the GC to find all live heap pointers (called root pointers). This sounds fairly easy to do but can get quite complicated in the presence of aggressive optimizations and register allocation. A tempting (and often used) solution would be to break encapsulation and make the optimizations aware of the GC requirements. This, of course, becomes harder the more advanced the optimizations are, and with LHC it is pretty much impossible. Consider the following GRIN code:

-- 'otherFunction' returns an object of type 'Maybe Int' using two virtual registers.
-- If 'x' is 'Nothing' then 'y' is undefined.
-- If 'x' is 'Just' then 'y' is a root pointer.
= do x, y <- otherFunction; ....

The above function illustrates that it is not always straightforward to figure out if a variable contains a root pointer. Sometimes determining that requires looking at other variables.

So how might we get around this hurdle, you might ask. Well, if the code for marking roots resides in user code instead of in the RTS, then it can be as complex as it needs to be. This fits well with the GRIN ideology of expressing as much in user code as possible.
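As a sketch of what root marking in user code buys us, imagine the compiler emitting a small marking function next to each call site. The types below ('Tag', 'Frame', 'markRoots') are invented for illustration; the real code operates on raw registers and stack slots.

```haskell
-- Sketch only: a compiler-generated, frame-specific root marker for the
-- 'otherFunction' call site above. Hypothetical types throughout.

data Tag = NothingTag | JustTag
  deriving Eq

-- The frame for the call site: 'x' holds the constructor tag, and 'y' is
-- a root pointer only when 'x' is the Just tag.
data Frame = Frame { tagX :: Tag, slotY :: Int }

-- Because this marker is generated per call site, it can encode arbitrary
-- logic, such as consulting one variable to classify another.
markRoots :: Frame -> [Int]
markRoots f
  | tagX f == JustTag = [slotY f]  -- 'y' is live: report it as a root
  | otherwise         = []         -- 'y' is undefined: nothing to mark
```

No generic stack walker in the RTS could make this decision, because the liveness of 'y' is a property of this particular call site.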

Now that we're familiar with the problem and the general concept of the solution, let's work out some of the details. Here's what happens when a GC event is triggered, described algorithmically:
  1. Save registers to memory.
    This is to avoid clobbering the registers and to make them accessible from the GC code.
  2. Save stack pointer.
  3. Initiate temporary stack.
    Local variables from the GC code will be placed on this stack.
  4. Jump to code for marking root pointers.
    This will peel back each stack frame until the bottom of the call graph has been reached.
  5. Discard temporary stack.
  6. Restore stack pointer.
  7. Restore registers.
Using this approach for exceptions involves stack cutting and a more advanced transfer of control, which will be discussed in a later post.
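The seven steps can be simulated in miniature. The model below is purely illustrative: registers and the stack are plain lists, and "marking" just collects each frame's roots; the real code manipulates machine registers and raw memory.

```haskell
-- Sketch only: a pure model of the GC-event sequence. All names are
-- hypothetical.

newtype GCFrame = GCFrame { frameRoots :: [Int] }

data Machine = Machine
  { registers :: [Int]     -- live register contents
  , stack     :: [GCFrame] -- call stack, innermost frame first
  }

-- Walk the stack, peeling back one frame at a time and collecting roots,
-- then hand control back with registers and stack pointer intact.
gcEvent :: Machine -> (Machine, [Int])
gcEvent m =
  let saved = registers m                    -- 1./2. save registers and SP
      roots = concatMap frameRoots (stack m) -- 3./4. mark roots frame by
                                             --       frame on a scratch stack
  in (m { registers = saved }, roots)        -- 5.-7. discard scratch stack,
                                             --       restore SP and registers
```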

In conclusion, these are the advantages of native-code stack walking:
  • Allows for objects to span registers as well as stack slots.
  • Separates the concerns of the optimizer, the garbage collector and the code generator.
  • Might be a little bit faster than dynamic stack walking since the stack layout is statically encoded.