Notes on the LHC: No, not that Hadron Collider. The Haskell Compiler.

<b>Haskell Suite: Type inference.</b> (2016-09-10, David Himmelstrup)<br />
Disclaimer: I am not an academic. The following post is akin to a plumber's guide to brain surgery.<br />
<br />
Type inference is complex and has evolved over time. In this post, I will try to explain how I see the landscape and where LHC and other Haskell compilers fit into this landscape.<br />
<br />
The beginning: The Hindley–Milner type system. 1982.<br />
The typing rules here are quite simple and every Haskeller seems to learn them intuitively. They include things like: if 'f :: A → B' and 'a :: A' then 'f a :: B'.<br />
In this system, types are <i>always</i> inferred and there must always be a single, most general type for every expression. This becomes a problem when we want higher-rank types, because there a single, most general type cannot be inferred. There may be many equally valid type solutions, and it has to be up to the programmer to select the appropriate one. But this cannot happen in plain HM, where type signatures are only used to make inferred types less general (e.g. [a] was inferred but the programmer wanted [Int]).<br />
Omitting the type signature in the following code can show us what plain HM would be like:<br />
<script src="https://gist.github.com/Lemmih/3d9e64eb6b6dd4735e823e95094555e9.js"></script>
In GHC, the snippet will run fine with the type signature but not without it.<br />
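<br />
The gist itself isn't reproduced inline, so as a stand-in, here is a minimal rank-2 example of my own that behaves the same way (the function 'both' is illustrative, not taken from the gist):<br />
<pre>
{-# LANGUAGE RankNTypes #-}

-- With the signature, the checker accepts the rank-2 type.
-- Without it, plain HM infers a monomorphic argument for 'f'
-- (from 'f 0' it picks Int -> Int) and then rejects 'f True'.
both :: (forall a. a -> a) -> (Int, Bool)
both f = (f 0, f True)

main :: IO ()
main = print (both id)
</pre>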
<br />
<br />
Version two: Bidirectional type system. 2000.<br />
People realised that checking the correctness of a given type is much easier than inferring a correct type. Armed with this knowledge, a new kind of type checker was born, one with two modes usually called 'up' and 'down'. The 'up' mode lifts a new correct type up from an expression, and the 'down' mode checks an expression against a given type. Because of these two modes, this kind of system was called bidirectional, and it deals with higher-rank types quite well.<br />
LHC currently implements this.<br />
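<br />
To make the two modes concrete, here is a toy bidirectional checker. It is only a sketch of the idea, with a made-up 'Expr'/'Type' AST and no polymorphism; it is not LHC's implementation:<br />
<pre>
-- 'infer' runs in 'up' mode (synthesise a type); 'check' runs in
-- 'down' mode (verify an expression against a known type).
data Type = TCon String | TArr Type Type deriving Eq
data Expr = Var String | App Expr Expr | Lam String Expr | Ann Expr Type

type Env = [(String, Type)]

infer :: Env -> Expr -> Maybe Type
infer env (Var x)   = lookup x env
infer env (App f a) = do
  TArr dom cod <- infer env f   -- the function type comes 'up'
  check env a dom               -- the argument type goes 'down'
  Just cod
infer env (Ann e t) = check env e t >> Just t
infer _   (Lam _ _) = Nothing   -- lambdas are never inferred

check :: Env -> Expr -> Type -> Maybe ()
check env (Lam x body) (TArr dom cod) = check ((x, dom) : env) body cod
check env e t = do
  t' <- infer env e
  if t == t' then Just () else Nothing
</pre>
Note how lambdas are only handled in checking mode: this is exactly why a higher-rank function type-checks once the programmer supplies a signature for it.<br />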
<br />
Version three: Boxy types. 2006.<br />
At this point it had become apparent that higher-rank types didn't really play well with higher-order functions. People often found themselves in situations where slight, seemingly innocent changes caused the type-checker to reject their programs. An example of this can be seen in this gist:<br />
<script src="https://gist.github.com/Lemmih/52d35e65b915a722bc377165109715ab.js"></script>
Impredicative polymorphism is required for the above code, and boxy types were a stab in that direction. Bidirectional type checking was a big improvement over plain HM but it lacked granularity: types were either 100% inferred or 100% checked, with no middle ground. What if you wanted to check parts of a type and infer the rest? Boxy types solve exactly that problem. Boxes are added to types (internally; we're not making changes to Haskell here) and they signify an unknown that should be inferred. Now parts of a type can be checked while the boxes are inferred, and we're left with the best of both worlds. This is what JHC implements, by the way. Boxy types were also implemented in GHC but were deemed too complicated.<br />
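<br />
The classic instance of this fragility, which may or may not be the one in the gist, involves '$' (my own example):<br />
<pre>
{-# LANGUAGE RankNTypes #-}
import Control.Monad.ST

-- Direct application is fine: 'runST' is applied to its argument
-- and the rank-2 instantiation happens at the call site.
good :: Int
good = runST (return 42)

-- The seemingly innocent rewrite below needs ($) instantiated at
-- the polymorphic type 'forall s. ST s Int', i.e. impredicativity:
--
--   bad = runST $ return 42
--
-- (GHC later added a special typing rule for ($) that accepts this
-- particular idiom; other higher-order functions still expose the
-- same problem.)

main :: IO ()
main = print good
</pre>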
<br />
Version four: FPH, First-class Polymorphism for Haskell. 2008.<br />
Impredicative polymorphism, a second attempt from the authors of boxy types. Improvements were made, but the problem was still not solved.<br />
<br />
Version five: OutsideIn(X). 2011.<br />
GHC is a hotbed for experimentation in type checkers. GADTs, multi-parameter type classes, type families: these are just some of the features that make the type-checker the largest and most complicated component of GHC. To deal with all of this, researchers came up with OutsideIn, described in a paper longer than all the previous papers put together. The algorithm is relatively simple but, for practical reasons, implementations must reject some programs that are valid according to the specification.<br />
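<br />
To see why inference must flow from the outside in, consider a small GADT example (mine, not from the paper):<br />
<pre>
{-# LANGUAGE GADTs #-}

data T a where
  TInt  :: T Int
  TBool :: T Bool

-- Matching on TInt brings the local equality 'a ~ Int' into scope,
-- but only inside that branch. The equality must not leak out, so
-- the checker pushes the signature's information from the outside
-- in; without the signature, GHC refuses to guess a type for 'f'.
f :: T a -> a
f TInt  = 0
f TBool = True
</pre>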
<br />
<b>Haskell Suite: Scoping.</b> (2016-09-09, David Himmelstrup)<br />
This post explains why I created 'haskell-scope' even though there's already another library that addresses the same problem.<br />
<br />
There are two libraries for resolving references in Haskell source code on Hackage: haskell-names and haskell-scope. Of the two, haskell-names is the oldest, the most feature-complete, and the most ambitious. It uses a very innovative scheme that allows the scope to be inspected at any point in the syntax tree. <a href="https://ro-che.info/articles/2013-03-04-open-name-resolution.html">You can read more about it in the linked article.</a> Unfortunately, all this innovation comes at the price of complexity.<br />
<br />
Here's the complete list of extensions used by haskell-names: CPP, ConstraintKinds, DefaultSignatures, DeriveDataTypeable, DeriveFoldable, DeriveFunctor, DeriveTraversable, FlexibleContexts, FlexibleInstances, FunctionalDependencies, GADTs, GeneralizedNewtypeDeriving, ImplicitParams, KindSignatures, MultiParamTypeClasses, NamedFieldPuns, OverlappingInstances, OverloadedStrings, RankNTypes, ScopedTypeVariables, StandaloneDeriving, TemplateHaskell, TupleSections, TypeFamilies, TypeOperators, UndecidableInstances, and ViewPatterns.<br />
<br />
A total of 27 extensions, many of which will never be implemented by LHC. If LHC is to compile itself one day, this obviously won't do. Enter haskell-scope: a library more plain than bread without butter. Give it an AST and it will annotate all the references. Nothing more, nothing less.

<b>Nursery sizes.</b> (2014-12-29, David Himmelstrup)<br />
Intel i5-3210M CPU, 3072 KB L3 cache. Not sure why the CPU stalls with the tiny nurseries.<br />
<center>
<iframe frameborder="0" height="315" scrolling="no" seamless="" src="https://docs.google.com/spreadsheets/d/1p8tiMbPwjx1D4bVFlmL74GvCSQT6vXx01OLDYzf4XrU/pubchart?oid=1180289639&format=interactive" width="496"></iframe>
<iframe frameborder="0" height="315" scrolling="no" seamless="" src="https://docs.google.com/spreadsheets/d/1p8tiMbPwjx1D4bVFlmL74GvCSQT6vXx01OLDYzf4XrU/pubchart?oid=1544259560&format=interactive" width="496"></iframe></center>
<b>Test suite for Haskell2010.</b> (2014-12-12, David Himmelstrup)<br />
To keep track of progress and to ward off regressions, the test suite now has a section for Haskell2010 compatibility checks:<br />
<br />
<pre># runhaskell Main.hs -t Haskell2010 --plain | tail -n 4
         Test Cases   Total
 Passed           0       0
 Failed           6       6
 Total            6       6
</pre>
<br />
The tests only cover a small part of the Haskell2010 specification, and none of them pass yet.

<b>Compiling to JavaScript.</b> (2014-12-04, David Himmelstrup)<br />
Lots of very interesting things become possible when everything (including the runtime system) is translated to LLVM IR. For example, compiling to JavaScript becomes trivial. Consider this ugly version of Hello World:<br />
<br />
<div style="background: #ffffff; border-width: .1em .1em .1em .8em; border: solid gray; overflow: auto; padding: .2em .6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #888888;">{-# LANGUAGE MagicHash #-}</span>
<span style="color: #008800; font-weight: bold;">module</span> <span style="color: #0e84b5; font-weight: bold;">Main</span> (<span style="color: #0066bb; font-weight: bold;">main</span>) <span style="color: #008800; font-weight: bold;">where</span>
<span style="color: #008800; font-weight: bold;">import</span> <span style="color: #0e84b5; font-weight: bold;">LHC.Prim</span>
<span style="color: #0066bb; font-weight: bold;">putStrLn</span> <span style="color: black; font-weight: bold;">::</span> <span style="color: #333399; font-weight: bold;">List</span> <span style="color: #333399; font-weight: bold;">Char</span> <span style="color: black; font-weight: bold;">-></span> <span style="color: #333399; font-weight: bold;">IO</span> <span style="color: #333399; font-weight: bold;">Unit</span>
<span style="color: #0066bb; font-weight: bold;">putStrLn</span> msg <span style="color: black; font-weight: bold;">=</span> putStr msg `thenIO` putStr (unpackString<span style="color: #333333;">#</span> <span style="background-color: #fff0f0;">"</span><span style="background-color: #fff0f0; color: #666666; font-weight: bold;">\n</span><span style="background-color: #fff0f0;">"</span><span style="color: #333333;">#</span>)
<span style="color: #0066bb; font-weight: bold;">main</span> <span style="color: black; font-weight: bold;">::</span> <span style="color: #333399; font-weight: bold;">IO</span> <span style="color: #333399; font-weight: bold;">Unit</span>
<span style="color: #0066bb; font-weight: bold;">main</span> <span style="color: black; font-weight: bold;">=</span> putStrLn (unpackString<span style="color: #333333;">#</span> <span style="background-color: #fff0f0;">"Hello World!"</span><span style="color: #333333;">#</span>)
<span style="color: #0066bb; font-weight: bold;">entrypoint</span> <span style="color: black; font-weight: bold;">::</span> <span style="color: #333399; font-weight: bold;">Unit</span>
<span style="color: #0066bb; font-weight: bold;">entrypoint</span> <span style="color: black; font-weight: bold;">=</span> unsafePerformIO main
</pre>
</div>
<br />
Notice the 'List' and 'Unit' types, and the 'thenIO' and 'unpackString#' functions. There's no syntactic sugar in LHC yet. You can get everything sugar-free these days, even Haskell compilers.<br />
<br />
Running the code through the LLVM dynamic compiler gives us the expected output:<br />
<br />
<pre># lli Hello.ll
Hello World!
</pre>
<br />
Neato, we have a complete Haskell application as a single LLVM file. Now we can compile it to JavaScript without having to worry about the garbage collector or the RTS; everything has been packed away in this self-contained file.<br />
<br />
<pre>$ emcc -O2 Hello.ll -o Hello.js # Compile to JavaScript using
# emscripten.
$ node Hello.js # Run our code with NodeJS.
Hello World!
$ ls -lh Hello.js # JavaScript isn't known to be
# terse but we're still smaller
# than HelloWorld compiled with GHC.
-rw-r--r-- 1 lemmih staff 177K Dec 4 23:33 Hello.js
</pre>
<b>The New LHC.</b> (2014-11-28, David Himmelstrup)<br />
<h2>
What is LHC?
</h2>
The LLVM Haskell Compiler (LHC) is a newly reborn project to build a working <a href="https://www.haskell.org/definition/haskell2010.pdf">Haskell2010</a> compiler out of reusable blocks. The umbrella organisation for these blocks is the <a href="https://github.com/haskell-suite">haskell-suite</a>. The hope is that with enough code reuse, even the daunting task of writing a Haskell compiler becomes manageable.<br />
<br />
<h2>
Has it always been like that?</h2>
No, LHC started as a fork of the <a href="https://repetae.net/computer/jhc/">JHC</a> compiler. A bit later, LHC was reimagined as a backend to the <a href="https://www.haskell.org/ghc/">GHC</a> compiler.<br />
<br />
<h2>
Can LHC compile my code?</h2>
LHC can only compile very simple programs for now. Stay tuned, though.<br />
<br />
<h2>
Where's development going next?</h2>
<ol>
<li>Better support for Haskell2010.</li>
<li>Reusable libraries for name resolution and type-checking.</li>
<li>Human-readable compiler output. With LLVM doing the low-level optimisations, our own matter less; we focus instead on generating pretty code.</li>
</ol>
<b>Very minimal Hello World.</b> (2014-11-25, David Himmelstrup)<br />
<div class="tr_bq">
The LLVM Haskell Compiler is finally coming together. From <b>Haskell parser</b> to <b>name resolution</b> to <b>type checker</b> to <b>desugarer</b> to <b>LLVM backend</b> to <b>GC</b>. Everything is held together with duct tape, but it feels great to finally compile and run Hello World.</div>
<br />
<pre># cat Hello.hs
{-# LANGUAGE MagicHash #-}
module Main (main) where
import LHC.Prim
main :: IO Unit
main =
  puts "Hello Haskell!"# `thenIO`
  return Unit
entrypoint :: Unit
entrypoint = unsafePerformIO main
</pre>
<br />
Compiling the above file yields a single LLVM program, containing user code and the RTS.<br />
<br />
<pre># lli Hello.ll
Hello Haskell!
</pre>
<b>Rough organizational overview.</b> (2010-10-19, David Himmelstrup)<br />
The exact details are constantly changing, but here's a rough overview of the LHC pipeline.<div><ol><li>External Core.<br />We've designed our compiler to use GHC as its frontend. This means that GHC will handle the parsing and type-checking of the Haskell code, in addition to some of the optimization (GHC particularly excels at high-level local optimizations). LHC benefits greatly by automatically supporting many of the Haskell extensions offered by GHC.<br />Notable characteristics: Non-strict, local functions, complex let-bindings. Pretty much just Haskell code with zero syntactic sugar.<br />Example snippet:<br /><pre><br /> base:Data.Either.$fShowEither :: ghc-prim:GHC.Types.Int =<br /> ghc-prim:GHC.Types.I# (11::ghc-prim:GHC.Prim.Int#);<br /></pre><br /></li><li>Simple Core.<br />Since External Core isn't immediately ready to be processed into GRIN code, we first translate it to Simple Core by removing or simplifying away a couple of features. The most noticeable feature of External Core is locally scoped functions, which simply do not fit in with the GRIN model. When translating to Simple Core, we hoist all local functions out to the top level.<br />Notable characteristics: Non-strict, no local functions, simplified let-bindings.</li><li>Grin Stage 1.<br />Let me start by introducing GRIN: GRIN (Graph Reduction Intermediate Notation) is a first-order, strict, (somewhat) functional language.<br />The purpose of this first stage of GRIN code is to encode the laziness explicitly. It turns out that you can translate a lazy language (like Simple Core) to a strict language (like GRIN) using only two primitives: eval and apply. The 'eval' primitive takes a closure, evaluates it if need be, and returns the resulting object. The 'apply' primitive simply adds an argument to a closure. Haskell compilers such as GHC, JHC and UHC all use this model for implementing laziness.<br />Notable characteristics: Strict, explicit laziness, opaque closures.<br />Example snippet:<br /><pre><br />base:Foreign.C.Types.@lifted_exp@ w ws =<br /> do x2508 <- @eval ws<br /> case x2508 of<br /> (Cbase:GHC.Int.I32# x#)<br /> -> do x2510 <- unit 11<br /> base:GHC.Show.$wshowSignedInt x2510 x# w<br /></pre><br /></li><li>Grin Stage 2.<br />At the time of writing, each of the mentioned compilers stops at the previous stage (or at what would be their equivalent of that stage).[1] LHC follows in the footsteps of the original GRIN compiler and applies a global control-flow analysis to eliminate/inline all eval/apply primitives. In the end, a lazy/suspended function taking, say, two arguments simply becomes a data constructor with two fields.<br />Notable characteristics: Strict, transparent closures.<br />Example snippet:<br /><pre><br />base:Foreign.Marshal.Utils.toBool1_caf =<br /> do [x2422] <- constant 0<br /> [x2423] <- @realWorld#<br /> [x2424 x2425] <- (foreign lhc_mp_from_int) x2422 x2423<br /> [x2426] <- constant Cinteger-gmp:GHC.Integer.Type.Integer<br /> unit [x2426 x2425]<br /></pre><br /></li><li>Grin Stage 3.<br />Things are starting to get fairly low-level already at stage 2. However, stage 2 is still a bit too high-level for some optimizations to be easily implemented.
Stage 3 breaks the code into smaller blocks that can easily be moved, inlined and short-circuited. The code is now sufficiently low-level that it can be pretty-printed as C.<br />Notable characteristics: Functions are broken down into functional units. Otherwise same as stage 2.<br />Example snippet:<br /><pre><br />base:GHC.IO.Encoding.Iconv.@lifted@_lvl60swYU38 rb3 rb4 =<br /> do [x21578] <- @-# rb4 rb3<br /> case x21578 of<br /> 0 -> constant Cghc-prim:GHC.Bool.False<br /> () -> constant Cghc-prim:GHC.Bool.True<br /></pre><br /></li><li>Grin--.<br />Grin-- is the latest addition to the pipeline and not much about it is known for certain. It is even up for debate whether it belongs to the GRIN family at all, since it diverges from the SSA style.<br />The purpose of Grin-- is to provide a vessel for expressing stack operations.<br />Notable characteristics: Operates on global virtual registers, enables explicit stack management.<br />Example snippet:<br /><pre><br />base:GHC.IO.Encoding.Iconv.@lifted@_lvl60swYU38:<br /> do x21578 := -# rb4 rb3<br /> case x21578 of<br /> 0 -> do x88175 := Cghc-prim:GHC.Bool.False<br /> ret<br /> () -> do x88175 := Cghc-prim:GHC.Bool.True<br /> ret<br /></pre><br /></li></ol><div><br /></div></div><div>Feel free to ask if you have any questions on the how and why of LHC.</div><div><br /></div><div>[1] UHC does have the mechanics for lowering the eval/apply primitives but it is not enabled by default.</div><div><br /></div>

<b>Accurate garbage collection.</b> (2010-10-16, David Himmelstrup)<br />
So, let's talk about garbage collection. Garbage collection is a very interesting topic because it is exceedingly simple in theory but very difficult in practice.<br /><br />To support garbage collection, the key thing a language implementor has to do is to provide a way for the GC to find all live heap pointers (called root pointers). This sounds fairly easy to do but can get quite complicated in the presence of aggressive optimizations and register allocation. A tempting (and often used) solution would be to break encapsulation and make the optimizations aware of the GC requirements. This of course becomes harder the more advanced the optimizations are, and with LHC it is pretty much impossible. Consider the following GRIN code:<br /><pre><br />-- 'otherFunction' returns an object of type 'Maybe Int' using two virtual registers.<br />-- If 'x' is 'Nothing' then 'y' is undefined.<br />-- If 'x' is 'Just' then 'y' is a root pointer.<br />someFunction<br /> = do x, y <- otherFunction; .... </pre><br />The above function illustrates that it is not always straightforward to figure out if a variable contains a root pointer. Sometimes determining that requires looking at other variables.<br /><br />So how might we get around this hurdle, you might ask. Well, if the code for marking roots resides in user-code instead of in the RTS, then it can be as complex as it needs to be. This fits well with the GRIN ideology of expressing as much in user-code as possible.<div><br /></div><div>Now that we're familiar with the problem and the general concept of the solution, let's work out some of the details.
Here's what happens when a GC event is triggered, described algorithmically:</div><div><ol><li>Save registers to memory.<br />This is to avoid clobbering the registers and to make them accessible from the GC code.</li><li>Save stack pointer.<br /></li><li>Initiate temporary stack.<br />Local variables from the GC code will be placed on this stack.</li><li>Jump to code for marking root pointers.<br />This will peel back each stack frame until the bottom of the call graph has been reached.</li><li>Discard temporary stack.</li><li>Restore stack pointer.</li><li>Restore registers.<br /></li></ol><div>Using this approach for exceptions involves stack cutting and a more advanced transfer of control, which will be discussed in a later post.</div></div><div><br /></div><div>In conclusion, these are the advantages of native-code stack walking:</div><div><ul><li>Allows for objects to span registers as well as stack slots.</li><li>Separates the concerns of the optimizer, the garbage collector and the code generator.</li><li>Might be a little bit faster than dynamic stack walking since the stack layout is statically encoded.</li></ul></div>

<b>A few updates.</b> (2010-10-16, David Himmelstrup)<br />
Not much has been put up on this blog lately but work is still going on under the hood. The most significant changes in the pipeline are proper tail-calls and a copying garbage collector.<br /><br />As it stands now, LHC uses the C stack extensively, but this is obviously not ideal as it makes garbage collection, exceptions and tail-calls nigh impossible to implement. Since the ideal solution of using a third-party target language isn't available (neither LLVM nor C-- supports arbitrary object models), I've decided to slowly inch closer to a native code generator for LHC. It is fortunate that I find <a href="https://code.haskell.org/lhc/papers/Automatically%20Generating%20the%20Backend%20of%20a%20Compiler%20Using%20Declarative%20Machine%20Descriptions.pdf">Joao Dias' dissertation</a> nearly as interesting as the <a href="https://code.haskell.org/lhc/papers/Code%20Optimization%20Techniques%20for%20Lazy%20Functional%20Languages.pdf">GRIN paper</a>.<br /><br />The first step would be to make the stack layout explicit in the GRIN code. This is necessary but not sufficient for tail-calls (some register coalescing is also required; more on this later). More importantly, accurate garbage collection now becomes a possibility. The way I want to implement garbage collection (and exceptions, for that matter) is through alternative return points. This is one of three methods discussed in a <a href="https://research.microsoft.com/en-us/um/people/simonpj/papers/c--/c--exn.htm">C-- paper by Norman Ramsey and Simon Peyton Jones</a> for implementing <span style="font-style: italic;">exceptions</span>. I believe this method is versatile enough for garbage collection as well.<br /><br />The concept revolves around using specialized code at each call site that knows enough about the stack layout to mark root pointers and to jump to the next stack frame. I will describe the details in another blog post. An interesting point is that the garbage collectors could be written in user-code instead of in the RTS.<br /><br />So, to recap: Accurate garbage collection is just around the corner and proper tail-calls will follow on its heels.
These two missing features are the reason that so many of the benchmarks fail to run for LHC.

<b>The Great Haskell Compiler shootout.</b> (2010-07-25, Austin Seipp)<br />
David has been working very hard recently on improving LHC and the nobench benchmark suite, in order to actively benchmark LHC against several other Haskell compilers: GHC, JHC and UHC.<br /><br />I figured I'd pop in and announce the benchmarks now that the latest HEAD version of LHC can compile many of them and we can get some meaningful numbers. You can see the results <a href="https://mirror.seize.it/report.html">here</a>. Note there are two columns for JHC: one using John's new garbage collector (via `-fjgc`) and one without, because naturally it tends to affect performance characteristics quite a bit. Some of the numbers are quite interesting: in particular, JHC is nearly 400x faster(!) than GHC on the atom benchmark, and UHC dominates the calendar benchmark currently (we haven't investigated why GHC isn't winning here). There are also some interesting variations in JHC, where the garbage collector wins in some cases, loses in others, and in some makes the difference between actually running and running out of memory (as you would expect).<br /><br />Perhaps somewhat regrettably, LHC loses in every benchmark. But we're only just beginning to implement real optimizations now that things are somewhat stabilized and the interface isn't arcane - prior to this we had very few optimizations outside of dead code elimination, simplification and a bit of inlining. David's begun implementing some more exotic optimizations recently, so it'll be interesting to see the results soon.<br /><br />Now we'll be able to more actively see the progress of optimizing Haskell compilers. Should be fun!

<b>Mirroring Boquist's GRIN papers.</b> (2010-06-11, Austin Seipp)<br />
Urban Boquist's PhD thesis, Code Optimization Techniques for Lazy Functional Languages, and its accompanying <a href="https://www.cs.chalmers.se/%7Eboquist/">website</a> have gone offline. Because these papers describe GRIN, the intermediate language we use in LHC, I have decided to mirror Boquist's three GRIN-related papers on my own server <a href="https://0xff.ath.cx/%7Eas/code/lhc/papers/">here</a>.<br /><br />If someone inside Chalmers could explain where his papers moved and if we could get them back online, that would be great!

<b>The new interface.</b> (2010-05-29, Austin Seipp)<br />
After hacking away for a little bit, I've finally gotten the new user interface for LHC working!<br /><pre><code><br />a ~/code/lhc/test $ lhc --help<br />The LHC Haskell Compiler, v0.11, (C) 2009-2010 David Himmelstrup, Austin Seipp<br /><br />lhc [FLAG] [FILE]<br />Compile Haskell code<br /><br />-? --help[=FORMAT] Show usage information (optional format)<br />
-V --version Show version information<br />-v --verbose Higher verbosity<br />-q --quiet Lower verbosity<br />--llvm Use LLVM backend<br />--ghc-opts=VALUE Give GHC frontend options<br />-i --install-library Don't compile; install modules under a library<br />-b --build-library Used when compiling a library (cabal only)<br />-O =VALUE Set optimization level (default=1)<br />--numeric-version Raw numeric version output<br />--supported-languages List supported LANGUAGE pragmas<br />-c Do not link, only compile<br />-o =VALUE output file for binary (default=a.out)<br />--src-dir=VALUE source code directory<br />a ~/code/lhc/test $ lhc HelloWorld.hs<br />[1 of 1] Compiling Main ( HelloWorld.hs, HelloWorld.o )<br />.....................<br />Found fixpoint in 7 iterations.<br />Lowering apply primitives... done in 0.09s<br />Heap points-to analysis... ...........................done in 0.95s<br />HPT fixpoint found in 27 iterations.<br />..................................................................................<br />Found fixpoint in 11 iterations.<br />Compiling C code... done in 0.11s<br />a ~/code/lhc/test $ ./HelloWorld<br />Hello, world!<br />a ~/code/lhc/test $<br /></code></pre><br />The changes should be landing shortly. It will require a patch to Cabal. There is also a bug in cabal/cmdargs that I have not yet tracked down which makes installing cabal packages difficult, although still possible, with this new scheme.<br /><br /><span style="font-weight: bold;">Edit 6-2-2010</span>: all of the necessary patches have been pushed to both LHC and Cabal to make the new user interface work. Try it out (install using 'cabal install -fwith-libs', provided you have the darcs HEAD version of Cabal), and tell us of any corner cases on IRC (#lhc-compiler on freenode)!

<b>A new user interface for LHC.</b> (2010-05-27, Austin Seipp)<br />
The current user interface for LHC is pretty unwieldy - it requires you to invoke lhc twice: once to generate an external core file, and another to generate the executable with LHC itself.<br /><br />There are a couple of problems with this:<br /><ol><li>It requires -you- to keep track of the generated .hcr files, which is a PITA.<br /></li><li>It makes the test suite complicated. I would like to use Simon Michael's excellent <a href="https://hackage.haskell.org/package/shelltestrunner">shelltestrunner</a> library, but the two-step compilation process would make the test files nastier than they need to be, so we currently maintain our own regression tool.</li><li>It made some of LHC's code very gross: we basically copied GHC's "Main.hs" file and stuck it in our source tree with some modifications, because we need to be able to accept all GHC options, even "insert arbitrary ghc option here" (for general usage, and cabal install support). This was, as you could guess, incredibly fragile in terms of maintenance and forwards/backwards compatibility.</li></ol>So now I've devised a new approach. We will instead run GHC in the background, twice: the first time, we will call GHC to compile your code with your provided options, and we will generally always stick something like '--make -fext-core -c' onto your command line to generate external core.
The second time, we will call GHC <span style="font-style: italic;">again</span>, but with the '-M' command line flag. This flag tells GHC to generate a Makefile that describes the dependency information between modules. Running it on Tom Hawkins's <a href="https://hackage.haskell.org/package/atom">atom</a> project, you get something like this:<br /><br /><pre><code># DO NOT DELETE: Beginning of Haskell dependencies<br />Language/Atom/Expressions.o : Language/Atom/Expressions.hs<br />Language/Atom/Elaboration.o : Language/Atom/Elaboration.hs<br />Language/Atom/Elaboration.o : Language/Atom/Expressions.hi<br />Language/Atom/Analysis.o : Language/Atom/Analysis.hs<br />Language/Atom/Analysis.o : Language/Atom/Expressions.hi<br />Language/Atom/Analysis.o : Language/Atom/Elaboration.hi<br />Language/Atom/Scheduling.o : Language/Atom/Scheduling.hs<br />Language/Atom/Scheduling.o : Language/Atom/Elaboration.hi<br />Language/Atom/Scheduling.o : Language/Atom/Analysis.hi<br />Language/Atom/Language.o : Language/Atom/Language.hs<br />Language/Atom/Language.o : Language/Atom/Expressions.hi<br />Language/Atom/Language.o : Language/Atom/Elaboration.hi<br />Language/Atom/Language.o : Language/Atom/Elaboration.hi<br />Language/Atom/Common.o : Language/Atom/Common.hs<br />Language/Atom/Common.o : Language/Atom/Language.hi<br />Language/Atom/Code.o : Language/Atom/Code.hs<br />Language/Atom/Code.o : Language/Atom/Scheduling.hi<br />Language/Atom/Code.o : Language/Atom/Expressions.hi<br />Language/Atom/Code.o : Language/Atom/Elaboration.hi<br />Language/Atom/Code.o : Language/Atom/Analysis.hi<br />Language/Atom/Compile.o : Language/Atom/Compile.hs<br />Language/Atom/Compile.o : Language/Atom/Language.hi<br />Language/Atom/Compile.o : Language/Atom/Elaboration.hi<br />Language/Atom/Compile.o : Language/Atom/Scheduling.hi<br />Language/Atom/Compile.o : Language/Atom/Code.hi<br />Language/Atom.o : Language/Atom.hs<br />Language/Atom.o : Language/Atom/Language.hi<br />Language/Atom.o : Language/Atom/Common.hi<br />Language/Atom.o : Language/Atom/Compile.hi<br />Language/Atom.o : Language/Atom/Code.hi<br /># DO NOT DELETE: End of Haskell dependencies<br /></code></pre><br />This tells us where all the generated object files are. GHC will put external core files next to these other object files (in all cases, as you cannot redirect the output location of external core files). So we can just parse this simple Makefile, remove duplicates, and substitute '.o' files for '.hcr' files. LHC takes care of the rest.<br /><br />This is of course in the event you want to compile an executable. If you want to compile a library, it's mostly the same, except that when we parse the files we just store them for later.<br /><br />But what about "obscure ghc option"? No fear! We'll just provide something like a --ghc-options flag which will get passed on to GHC's invocations. LHC can then have its own, more general command line interface to control various options in the whole-program stages (on this note, Neil Mitchell's <a href="https://hackage.haskell.org/package/cmdargs">cmdargs</a> library is amazing for this stuff!)<br /><br />For default options to GHC, I think we should perhaps stick to the Haskell 2010 standard - that is, by default, LHC will run GHC with language options to enable compilation of compliant Haskell 2010 code without any OPTIONS_GHC or LANGUAGE pragmas.
Optimization levels for GHC can be implied by LHC's supplied optimization level or given explicitly via --ghc-options.<br /><br />Comments are always welcome.

<b>Limited release.</b> (2010-05-24, David Himmelstrup)<br />
This release of lhc-0.10 marks the move to GHC-6.12 and hopefully a more stable build infrastructure. As it stands, lhc-0.10 still lacks support for several important features, such as floating point values and large parts of the FFI.<br /><br />To install LHC you need the development versions of Cabal and cabal-install. They can be fetched from these darcs repositories:<br /><pre><br /> darcs get --lazy https://darcs.haskell.org/cabal<br /> darcs get --lazy https://darcs.haskell.org/cabal-install<br /></pre><br />Once you've installed both Cabal and cabal-install, lhc-0.10 can be installed with the following command:<br /><pre><br /> cabal install lhc-0.10<br /></pre><br /><br />Here's how to use LHC once it has been successfully installed:<br /><pre><br /> lhc -c SourceFile.hs # This compiles SourceFile.hs to SourceFile.hcr<br /> lhc compile SourceFile.hcr # This compiles SourceFile.hcr to the executable SourceFile.<br /> ./SourceFile<br /></pre><br /><br />Happy Hacking.

<b>Laziness and polymorphism.</b> (2010-05-18, David Himmelstrup)<br />
This may be obvious to some but I truly didn't grok the relationship between laziness and polymorphism before I started work on LHC.<br /><br />The Haskell language has two very distinguishing features: Laziness and parametric polymorphism. At a glance, these two features may not seem to have that much in common. However, laziness can be seen as a form of implicit polymorphism (and it tends to be implemented as such). Consider a function with the following type signature:<br /><blockquote><br />f :: Integer -> Integer<br /></blockquote><br />One could say this function is polymorphic in the first argument: the argument can either be an actual Integer or it can be something that <span style="font-style:italic;">evaluates</span> to an Integer. When we look at laziness as a form of polymorphism, it becomes clear that eliminating polymorphism will also eliminate laziness.<br />This is largely irrelevant for the average Jane Doe hacker. But if you're working on optimizations aimed at improving the time or space characteristics by eliminating "unwanted" polymorphism, it becomes important to keep laziness in mind. The hint here is aimed at adaptive containers.<br /><br />Well, to make a short story even shorter: laziness and polymorphism are different sides of the same coin. If you optimize away polymorphism, you will (perhaps inadvertently) also squash laziness.<br /><br />All this is obvious in retrospect but I didn't get it until it was right in front of me.

<b>Yet another unfair benchmark.</b> (2009-09-04, David Himmelstrup)<br />
A lot of things have happened in LHC over the last couple of weeks.
With the inclusion of Integer and IEEE float support, LHC is finally usable enough for simple benchmarks.<br /><br />I've excavated the old 'nobench' benchmark and pitched four Haskell implementations against each other. It should be noted that these benchmark numbers are even more unreliable than usual. UHC's C backend doesn't work on x86-64 and thus it compiles to bytecode. All in all, you should trust benchmarks as much as you trust politicians.<br /><br />The benchmark results can be found here: <a href="https://darcs.haskell.org/%7Elemmih/nobench/x86_64/results.html">https://darcs.haskell.org/~lemmih/nobench/x86_64/results.html</a>.<br /><br />The results are updated frequently.<br /><br />The benchmark source can be found here: <a href="https://nobench.seize.it/">https://nobench.seize.it/</a>

<b>Status update: New Integer implementation.</b> (2009-08-15, David Himmelstrup)<br />
We've finally gotten around to replacing our Integer type with a real bignum implementation. The bignum code was written by Isaac Dupree in wonderfully pure Haskell, and it was a snug fit for our needs. After stripping the Prelude dependency and hooking it up to the Integer interface, it worked without a hitch.<br /><br />Let's try it out:<br /><br /><pre><br />david@desktop:lhc$ cat HelloWorld.hs<br />module Main where<br />main = do print (2^150)<br /> print (3*10^13*299792458)<br />david@desktop:lhc$ ./HelloWorld<br />1427247692705959881058285969449495136382746624<br />8993773740000000000000<br /></pre>

<b>New backend.</b> (2009-06-08, David Himmelstrup)<br />
The new C backend has been pushed to the repository and it seems to work without a hitch. No particular effort has been directed at making it efficient (and none will be, since this backend is only a temporary measure). Initial testing shows it to be around 40-50 times faster than the interpreter.<br />Writing this backend was surprisingly easy; low-level GRIN (LHC's intermediate language) can be directly pretty-printed as C code. By far the hardest part was giving up on LLVM and settling for C.<br /><br />Future development will focus on grin-to-grin optimizations and a native code generator.

<b>New release: LHC 0.8.</b> (2009-05-21, Austin Seipp)<br />
It's been about 5 months but, finally, a new release of LHC has been born and is on hackage - so you should get it now!<br /><br />This new release has been a lot of hard work on behalf of David especially, and we've spent the past day or two working out a lot of installation issues on my MacBook etc. But the result is looking really nice, even if premature. There are still some bugs to work out, but for the most part all our installation issues are fixed, and development can steam ahead on more interesting stuff.<br /><br />Perhaps the biggest change in this release is that LHC is now a backend for GHC instead of its own codebase. Amongst other things, this pretty much means that LHC already has support for all of GHC's language extensions.
Also, it shares the exact same command line options (and a few more), so it's pretty similar to GHC under the hood.<br /><br />The code base is very small and simple: there is no garbage collection, exceptions or threading yet. Everything is slow right now and the heap etc. are dummy. The result already works well though, and so we're releasing it now for your pleasure.<br /><br />There are full installation instructions for LHC + libraries <a href="https://lhc.seize.it/Front+Page#installing">HERE</a>.<br /><br />Enjoy.

<b>Constructor specialization and laziness.</b> (2009-05-04, David Himmelstrup)<br />
Edward recently publicised some experiments with constructor specialization and the state monad. You can find the sources <a href="https://www.reddit.com/r/haskell/comments/8hbgu/an_adaptive_state_monad_40_faster_than_our_best/">here</a>.<br /><br />What he did was basically to remove laziness and polymorphism from the state monad using a fancy new GHC feature called <a href="https://www.haskell.org/haskellwiki/GHC/Type_families">indexed type families</a>. Benchmarking the different implementations was done by calculating the Fibonacci sequence and printing the 400,000th element.<br /><br />There are quite a number of such adaptive data types. They range from lists to maps to monads, but they all share two fundamental drawbacks: (1) all usage combinations must be explicitly enumerated, and (2) laziness must be eliminated. Fortunately for LHC, using whole-program optimization solves both problems (by static analysis and unboxing at a higher granularity).<br /><br />I believe it's important to realise that polymorphism and laziness are two sides of the same coin. Destroy one and you are likely to inadvertently destroy the other. 'Is this a bad thing?' you might ask. Well, the short answer is "yes!". Laziness, when used correctly, is incredibly powerful. Let's have another look at the State monad benchmark.<br /><br />The following program implements the above mentioned benchmark. It is 10 times faster and uses 50 times less memory than the most efficient strict version.<br /><pre><br />{-# OPTIONS_GHC -O2 -XBangPatterns #-}<br />import Control.Monad<br />import Control.Monad.State<br /><br />main = print . last . fst $ fib 400000<br /><br />fib n = unS (fibN n) (0,1::Int)<br />fibN n = replicateM' n oneFib<br />oneFib = (get >>= \(m,n) -> put (n,m+n) >> return m)<br /><br />replicateM' 0 fn = return []<br />replicateM' n fn<br />  = do !e <- fn<br />       r <- replicateM' (n-1 :: Int) fn<br />       return (e:r)<br /></pre><br /><br />So, in conclusion: constructor specialization is an interesting technique, but its full power can only be realised as an interprocedural optimization pass.

<b>A new beginning.</b> (2009-04-10, David Himmelstrup)<br />
The LHC project has finally resumed development after a few weeks of inactivity. Things have taken big steps in a new direction, however, and nearly everything except the name has changed.<br />We're no longer a fork of JHC. Maintaining a complete Haskell front-end was too much of a hassle, especially considering we're only interested in optimization on the GRIN level.
For this reason, LHC has reinvented itself as an alternative backend to the Glorious Glasgow Haskell Compiler.<br /><br />The lack of testability was a major problem in the previous version of LHC, but hopefully we've learned from our mistakes. The new development efforts will be structured around a gradually decreasing reliance on a GRIN evaluator. In other words, we want to run the full testsuite between each and every significant code transformation. That no transformation should change the external behaviour of a GRIN program is a very simple invariant.<br /><br />The current toolchain looks as follows:<br /><pre><br />david@desktop:basic$ cat Args.hs<br />import System<br /><br />main :: IO ()<br />main = do<br /> as <- getArgs<br /> mapM_ putStrLn as<br />david@desktop:basic$ ghc -fforce-recomp -O2 -fext-core -c Args.hs <br />david@desktop:basic$ lhc compile Args.hcr > Args.lhc<br />david@desktop:basic$ ./Args.lhc One Two Three<br />One<br />Two<br />Three<br /></pre><br /><br />The contents of 'Args.lhc' is unoptimized GRIN code. It is not by any means efficient or fast, but it serves its purpose.<br /><br />Development will now focus on creating GRIN transformations that reduce the need for the RTS (our GRIN evaluator serves as the RTS).

<b>Hello world!</b> (2009-04-08, David Himmelstrup)<br />
After weeks of development, lhc is finally able to interpret Hello World!<br /><pre><br />david@desktop:lhc$ cat HelloWorld.hs<br />module Main where<br />main = putStr "Hello world\n"<br />david@desktop:lhc$ ghc -O2 -fext-core HelloWorld.hs -c<br />david@desktop:lhc$ lhc build HelloWorld.hcr > HelloWorld.grin<br />Parsing core files...<br />Tracking core dependencies...<br />Translating to grin...<br />Removing dead code...<br />Printing grin...<br />david@desktop:lhc$ wc -l HelloWorld.grin<br />8054 HelloWorld.grin<br />david@desktop:lhc$ lhc eval HelloWorld.hcr <br />Parsing core files...<br />Tracking core dependencies...<br />Translating to grin...<br />Removing dead code...<br />Hello world<br />Node (Aliased 251 "ghc-prim:GHC.Prim.(#,#)") (ConstructorNode 0) [Empty,HeapPointer 263]<br /></pre><br /><br />Supported primitives include: indexCharOffAddr#, newPinnedByteArray#, *MutVar, *MVar.<br /><br />Exceptions are currently ignored and the heap is never garbage collected. However, since I'm evaluating the GRIN (as opposed to translating it to LLVM or C), adding these features should be a piece of cake.

<b>Ease of implementation.</b> (2009-03-22, David Himmelstrup)<br />
Developing a usable compiler for a high-level language such as Haskell isn't a trivial thing to do. Any effort to trade developer time against CPU time is likely to be a wise choice. In this post I will outline a few attempts to deal with the complexity of LHC in high-level ways. Hopefully the end result won't be too slow.<br /><br /><br /><span style="font-weight: bold;">Case short-circuiting.</span><br />Since case expressions in GRIN do not force the evaluation of the scrutinized value, they are usually preceded by a call to 'eval'.
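As a reminder of what 'eval' does operationally, here is a toy model in plain Haskell; this is only my sketch of the semantics, not LHC's actual node representation:<br />
<pre>
import Data.IORef

-- A closure is either an evaluated value or a suspended computation.
data Closure a = Value a | Thunk (IO a)

-- 'eval' forces a closure if needed and updates it in place, so
-- the suspended computation runs at most once.
eval :: IORef (Closure a) -> IO a
eval ref = do
  c <- readIORef ref
  case c of
    Value v -> return v
    Thunk m -> do
      v <- m
      writeIORef ref (Value v)
      return v
</pre>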
Then, after the 'eval' calls have been inlined, case-of-case patterns like this are very common:<br /><pre><br />do val <- case x of<br /> [] -> unit []<br /> CCons x xs -> unit (CCons x xs)<br /> Ffunc a -> func a<br />case val of<br /> [] -> jumpToNilCase<br /> CCons x xs -> jumpToConsCase x xs<br /></pre><br />This is obviously inefficient, since the Nil and Cons cases will be scrutinized twice. In the GRIN paper, Boquist deals with this by implementing a case short-circuiting optimization after the GRIN code has been translated to machine code. However, dealing with optimizations on the machine code level is quite a tricky thing to do, and I'd much rather implement this optimization in GRIN. By making aggressive use of small functions we can do exactly that:<br /><pre><br />do case x of<br /> [] -> jumpToNilCase<br /> CCons x xs -> jumpToConsCase x xs<br /> Ffunc a -> do val <- func a; checkCase val<br /><br />checkCase val =<br /> case val of<br /> [] -> jumpToNilCase<br /> CCons x xs -> jumpToConsCase x xs<br /></pre><br /><br /><br /><span style="font-weight: bold;">Register allocation and tail calls.</span><br />Using a fixed calling convention is not necessary for whole-program compilers like LHC. Instead, we choose to create a new calling method for each procedure (this is easier than it sounds).<br />This has the obvious consequence of requiring the convention for return values to be identical for procedures that invoke each other with tail-calls. This was deemed an unacceptable restriction in the GRIN paper, and all tail-calls were subsequently removed before register allocation took place. Afterwards, another optimization step reintroduced tail-calls where possible.<br />I believe this is too much trouble for too little gain. The possible performance hit is outweighed by the ease of implementation and the guarantee of tail-calls.<br /><br /><br /><span style="font-weight: bold;">Simple node layout.</span><br />An unevaluated value is represented simply by a function name (or tag) and a fixed number of arguments. This value is then overwritten once it has been evaluated. However, the new value may be bigger than what was allocated to represent the unevaluated function.<br />One way to deal with this is to have two different node layouts: a fixed-size node for small values, and a variable-size node for big values. This is the approach taken in the GRIN paper and it understandably adds quite a bit of complexity.<br />Another method is to use indirections. This trades smaller average node size and ease of implementation against more heap allocations.

<b>Grin a little.</b> (2009-02-05, David Himmelstrup)<br />
It has come to my attention that we are not using GRIN to its fullest. More specifically, it seems that the 'eval' and 'update' operations are handled by the RTS. This has unfortunate consequences for both the optimizer and the backend code.<br />Without an explicit control-flow graph (given by inlining eval/apply), many of our more important transformations cannot be performed. Even worse than the lost optimization opportunities is the increased complexity of the RTS.
Dealing with issues of correctness is an annoying distraction from the more enjoyable endeavour of applying optimizations.<br /><br />Moving away from the magical implementation of 'update' means we have to start thinking about our memory model. The GRIN paper suggests using a fixed node size with a tail pointer for additional space if necessary. With this scheme we can update common-case nodes without allocating more heap space. However, since we're most likely to encounter complications with respect to concurrency and certain forms of garbage collection, I think a simpler approach is more apt.<br />Replacing nodes with indirections is very easy to implement, it doesn't clash with any optimizations (the original GRIN approach interferes with fetch movement), and it opens the door for advanced features such as concurrency.<br /><br />So this is what I'll be working on in the near future. All magic has to be purged from the kingdom so logic and reason can reign supreme.