https://github.com/Kindelia/HVM
# High-order Virtual Machine (HVM)

High-order Virtual Machine (HVM) is a pure functional compile target that is lazy, non-garbage-collected, and massively parallel. It is also beta-optimal, meaning that, in several cases, it can be exponentially faster than most functional runtimes, including Haskell's GHC.

That is possible due to a new model of computation, the Interaction Net, which combines the Turing Machine with the Lambda Calculus. Previous implementations of this model have been inefficient in practice; however, a recent breakthrough has drastically improved its efficiency, giving birth to the HVM. Despite being a prototype, it already beats mature compilers in many cases, and is set to scale towards uncharted levels of performance.

Welcome to the inevitable parallel, functional future of computers!

## Usage

### 1. Install it

First, install Rust. Then, type:

```sh
cargo install hvm
```

### 2. Create an HVM file

HVM files look like untyped Haskell.
Save the file below as `main.hvm`:

```
// Creates a tree with `2^n` elements
(Gen 0) = (Leaf 1)
(Gen n) = (Node (Gen (- n 1)) (Gen (- n 1)))

// Adds all elements of a tree
(Sum (Leaf x))   = x
(Sum (Node a b)) = (+ (Sum a) (Sum b))

// Performs 2^n additions in parallel
(Main n) = (Sum (Gen n))
```

The program above creates a perfect binary tree with `2^n` elements and adds them up. Since it is recursive, HVM will parallelize it automatically.

### 3. Run and compile

```sh
hvm r main 10                      # runs it with n=10
hvm c main                         # compiles HVM to C
clang -O2 main.c -o main -lpthread # compiles C to BIN
./main 30                          # runs it with n=30
```

The program above runs in about 6.4 seconds on a modern 8-core processor, while the identical Haskell code takes about 19.2 seconds on the same machine with GHC. This is HVM: write a functional program, get a parallel C runtime. And that's just the tip of the iceberg!

See the Nix usage documentation in NIX.md.

## Benchmarks

HVM has two main advantages over GHC: automatic parallelism and beta-optimality. I've selected 5 common micro-benchmarks to compare them. Keep in mind that HVM is still an early prototype, so it obviously won't beat GHC in general, but it does quite well already and should improve steadily as optimizations are implemented. Tests were compiled with `ghc -O2` for Haskell and `clang -O2` for HVM, on an 8-core M1 Max processor. The complete files to replicate these results are in the /bench directory.
### List Fold (Sequential)

main.hvm:

```
// Folds over a list
(Fold Nil c n)         = n
(Fold (Cons x xs) c n) = (c x (Fold xs c n))

// A list from 0 to n
(Range 0 xs) = xs
(Range n xs) =
  let m = (- n 1)
  (Range m (Cons m xs))

// Sums a big list with fold
(Main n) =
  let size = (* n 1000000)
  let list = (Range size Nil)
  (Fold list λa λb (+ a b) 0)
```

main.hs:

```haskell
-- Folds over a list
fold Nil c n = n
fold (Cons x xs) c n = c x (fold xs c n)

-- A list from 0 to n
range 0 xs = xs
range n xs =
  let m = n - 1
  in range m (Cons m xs)

-- Sums a big list with fold
main = do
  n <- read.head <$> getArgs :: IO Word32
  let size = 1000000 * n
  let list = range size Nil
  print $ fold list (+) 0
```

[ListFold chart: the lower the better]

In this micro-benchmark, we just build a huge list of numbers, and fold over it to sum them. Since lists are sequential, and since there are no higher-order lambdas, HVM doesn't have any technical advantage over GHC. As such, both runtimes perform very similarly.

### Tree Sum (Parallel)

main.hvm:

```
// Creates a tree with `2^n` elements
(Gen 0) = (Leaf 1)
(Gen n) = (Node (Gen (- n 1)) (Gen (- n 1)))

// Adds all elements of a tree
(Sum (Leaf x))   = x
(Sum (Node a b)) = (+ (Sum a) (Sum b))

// Performs 2^n additions
(Main n) = (Sum (Gen n))
```

main.hs:

```haskell
-- Creates a tree with 2^n elements
gen 0 = Leaf 1
gen n = Node (gen (n - 1)) (gen (n - 1))

-- Adds all elements of a tree
sun (Leaf x)   = x
sun (Node a b) = sun a + sun b

-- Performs 2^n additions
main = do
  n <- read.head <$> getArgs :: IO Word32
  print $ sun (gen n)
```

[TreeSum chart: the lower the better]

TreeSum recursively builds and sums all elements of a perfect binary tree. HVM outperforms Haskell by a wide margin because this algorithm is embarrassingly parallel, allowing it to fully use the available cores.
### QuickSort (Parallel)

main.hvm:

```
// QuickSort
(QSort p s Nil)          = Empty
(QSort p s (Cons x Nil)) = (Single x)
(QSort p s (Cons x xs))  =
  (Split p s (Cons x xs) Nil Nil)

// Splits list in two partitions
(Split p s Nil min max) =
  let s   = (>> s 1)
  let min = (QSort (- p s) s min)
  let max = (QSort (+ p s) s max)
  (Concat min max)
(Split p s (Cons x xs) min max) =
  (Place p s (< p x) x xs min max)

// Sorts and sums n random numbers
(Main n) =
  let list = (Randoms 1 (* 100000 n))
  (Sum (QSort Pivot Pivot list))
```

main.hs:

```haskell
-- QuickSort
qsort p s Nil          = Empty
qsort p s (Cons x Nil) = Single x
qsort p s (Cons x xs)  =
  split p s (Cons x xs) Nil Nil

-- Splits list in two partitions
split p s Nil min max =
  let s'   = shiftR s 1
      min' = qsort (p - s') s' min
      max' = qsort (p + s') s' max
  in Concat min' max'
split p s (Cons x xs) min max =
  place p s (p < x) x xs min max

-- Sorts and sums n random numbers
main = do
  n <- read.head <$> getArgs :: IO Word32
  let list = randoms 1 (100000 * n)
  print $ sun $ qsort pivot pivot $ list
```

[QuickSort chart: the lower the better]

This test modifies QuickSort to return a concatenation tree instead of a flat list. This makes it embarrassingly parallel, allowing HVM to outperform GHC by a wide margin again. It even beats Haskell's sort from Data.List! Note that flattening the tree would make the algorithm sequential. That's why we didn't choose MergeSort, as merge operates on lists. In general, trees should be favoured over lists on HVM.

### Composition (Optimal)

main.hvm:

```
// Computes f^(2^n)
(Comp 0 f x) = (f x)
(Comp n f x) = (Comp (- n 1) λk(f (f k)) x)

// Performs 2^n compositions
(Main n) = (Comp n λx(x) 0)
```

main.hs:

```haskell
-- Computes f^(2^n)
comp 0 f x = f x
comp n f x = comp (n - 1) (\x -> f (f x)) x

-- Performs 2^n compositions
main = do
  n <- read.head <$> getArgs :: IO Int
  print $ comp n (\x -> x) (0 :: Int)
```

[Composition chart: the lower the better]

This chart isn't wrong: HVM is exponentially faster for function composition, due to optimality, depending on the target function. There is no parallelism involved here.
In general, if the composition of a function `f` has a constant-size normal form, then `f^(2^N)(x)` is linear-time (`O(N)`) on HVM, and exponential-time (`O(2^N)`) on GHC. This can be taken advantage of to design novel functional algorithms. I highly encourage you to try composing different functions and watching how their complexity behaves. Can you tell if it will be linear or exponential? Or how recursion will affect it? That's a very insightful experience!

### Lambda Arithmetic (Optimal)

main.hvm:

```
// Increments a Bits by 1
(Inc xs) = λex λox λix
  let e = ex
  let o = ix
  let i = λp(ox (Inc p))
  (xs e o i)

// Adds two Bits
(Add xs ys) = (App xs λx(Inc x) ys)

// Multiplies two Bits
(Mul xs ys) =
  let e = End
  let o = λp(B0 (Mul p ys))
  let i = λp(Add ys (B0 (Mul p ys)))
  (xs e o i)

// Squares (n * 100k)
(Main n) =
  let a = (FromU32 32 (* 100000 n))
  let b = (FromU32 32 (* 100000 n))
  (ToU32 (Mul a b))
```

main.hs:

```haskell
-- Increments a Bits by 1
inc xs = Bits $ \ex -> \ox -> \ix ->
  let e = ex
      o = ix
      i = \p -> ox (inc p)
  in get xs e o i

-- Adds two Bits
add xs ys = app xs (\x -> inc x) ys

-- Multiplies two Bits
mul xs ys =
  let e = end
      o = \p -> b0 (mul p ys)
      i = \p -> add ys (b0 (mul p ys))
  in get xs e o i

-- Squares (n * 100k)
main = do
  n <- read.head <$> getArgs :: IO Word32
  let a = fromU32 32 (100000 * n)
  let b = fromU32 32 (100000 * n)
  print $ toU32 (mul a b)
```

[LambdaArithmetic chart: the lower the better]

This example takes advantage of beta-optimality to implement multiplication using lambda-encoded bitstrings. Once again, HVM halts instantly, while GHC struggles to deal with all these lambdas. Lambda encodings have wide practical applications. For example, Haskell's lists are optimized by converting them to lambdas (foldr/build), its Free Monads library has a faster version based on lambdas, and so on. HVM's optimality opens the door to an entire unexplored field of lambda-encoded algorithms that were simply impossible before.

Charts made on plotly.com.

## How is that possible?

Check HOW.md.

## How can I help?
Most importantly, if you appreciate our work, help spread the word about the project! Posting on Reddit, communities, etc. helps more than you think.

Second, I'm looking for partners! I believe HVM's current design is ready to scale and become the fastest runtime in the world, but much needs to be done to get there. We're also building interesting products on top of it. If you'd like to get involved, please email me, or just send me a personal message on Twitter.

## Community

To follow the project, join our Telegram Chat, the Kindelia community on Discord, or Matrix!