Lam
:rocket: a lightweight, universal actor-model vm for writing scalable and reliable applications that run natively and on WebAssembly
LAM is a lightweight, universal virtual machine for writing scalable and reliable applications that run natively and on WebAssembly.
It is inspired by Erlang and Lua, and is compatible with the Erlang VM.
LAM lets you reuse the same programming paradigm, well known for its productivity, across your entire application stack.
Come join us on Discord! (P.S.: we share a server with Caramel)
Features
- Runs Natively and on WebAssembly -- pick and choose your runtime!
- Easy to Target -- a small and specified bytecode with a text and binary format
- Erlang VM compatibility -- run your existing Erlang, Elixir, Caramel, and Gleam code
- Seamless multi-core -- built to scale from one to thousands of cores for free
- Extreme reliability -- use Erlang's OTP supervision patterns
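For example, reliability on LAM follows the same supervision pattern used on the Erlang VM. This is a minimal sketch of an OTP supervisor; the `my_sup` and `my_worker` names are illustrative, not part of LAM:

```erlang
%% Minimal OTP supervisor sketch. Module and child names
%% here are made up for illustration.
-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% Restart a crashed worker up to 5 times per 10 seconds.
    SupFlags = #{strategy => one_for_one,
                 intensity => 5,
                 period => 10},
    ChildSpecs = [#{id => my_worker,
                    start => {my_worker, start_link, []}}],
    {ok, {SupFlags, ChildSpecs}}.
```

If a worker crashes, the supervisor restarts it according to the declared strategy instead of the failure propagating through the whole system.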
Status
Still under heavy development!
There's plenty of work to be done before it is fully usable, but we keep a few tracking issues here:
Erlang and Elixir ecosystem compatibility is tracked here:
Getting Started
You can download the latest binary from the releases page. After unpacking it, add the lam binary to your PATH environment variable and start playing around with it.
Like this:
# in this example I'm running linux with glibc
$ wget https://github.com/AbstractMachinesLab/lam/releases/download/v0.0.7/lam-v0.0.7-x86_64-unknown-linux-gnu.tar.gz
$ tar xzf lam-*
$ export PATH=$(pwd)/lam/bin:$PATH
Now we can do a quick test. Make a file test.erl with the following contents:
-module(test).
-export([main/1]).

main([]) -> ok;
main([Name|T]) ->
  io:format(<<"Hello, ~p!\n">>, [Name]),
  main(T).
And we can compile it to BEAM bytecode and use LAM to build a binary from it, like this:
$ erlc test.erl
$ lam build test.beam --output test.exe --target native --entrypoint test
$ ./test.exe Joe Robert Mike
Hello, Joe!
Hello, Robert!
Hello, Mike!
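The same command should be able to produce a WebAssembly build by switching the target. The exact `--target` value below is an assumption based on the supported output formats; check `lam build --help` for the names your version accepts:

# hypothetical: the --target value may differ in your release
$ lam build test.beam --output test.wasm --target wasm --entrypoint test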
How does it work?
LAM compiles your .beam files ahead of time into a representation that's optimized for running them.
Then it bundles that with the appropriate target runtime into some binary output.
                 binary
              instructions
             +------------+            output
 .beam files |            |          +-----------+
    +------->| 1001101110 |-----+    |   .exe    |
             +------------+     |    |           |
                                +--->|   .wasm   |
             +-------------+    |    |           |
             | LAM Runtime |----+    | .wasm/.js |
             +-------------+         +-----------+