libllama

Run llama.cpp from Deno

import { Llama } from "jsr:@divy/libllama";
import process from "node:process";

// Load a GGUF model; the path can be overridden via the first CLI argument.
const engine = new Llama({
  model: process.argv[2] || "./models/llama-2-7b-chat.Q2_K.gguf",
});

// Stream the completion token by token; returning true from the
// callback keeps generation going.
const text = process.argv[3];
engine.predict(text, {
  tokenCallback: (token) => {
    process.stdout.write(token);
    return true;
  },
});
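The tokenCallback's boolean return value appears to control whether generation continues, following the same convention as go-llama; this is an assumption, not something this README states. A self-contained sketch with a mock engine illustrating token accumulation and early stopping — MockLlama and its predict signature are stand-ins, not the real API:

```typescript
// Mock stand-in for the native engine, used only to illustrate the
// tokenCallback contract: return true to continue, false to stop early
// (assumption based on the go-llama convention).
class MockLlama {
  predict(
    _prompt: string,
    opts: { tokenCallback: (token: string) => boolean },
  ): void {
    for (const token of ["The ", "answer ", "is ", "42", "."]) {
      if (!opts.tokenCallback(token)) break;
    }
  }
}

// Collect streamed tokens into a string, stopping after a token budget.
function complete(
  engine: MockLlama,
  prompt: string,
  maxTokens: number,
): string {
  let out = "";
  let count = 0;
  engine.predict(prompt, {
    tokenCallback: (token) => {
      out += token;
      count += 1;
      return count < maxTokens; // false stops generation early
    },
  });
  return out;
}

console.log(complete(new MockLlama(), "What is the meaning of life?", 3));
// → "The answer is "
```

The same accumulate-and-cap pattern works unchanged against the real engine: only the MockLlama type would be swapped for Llama.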

Building

Make sure to clone the repository with submodules (git clone --recursive) and install the prerequisites for building llama.cpp.

Build libllama using CMake:

mkdir build
cd build
cmake ..
make

Run the example:

export LIBLLAMA_PATH=../build/libllama.dylib
deno run --allow-ffi example.ts \
    ./models/llama-2-7b-chat.Q2_K.gguf \
    "What is the meaning of life?"
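The export above assumes macOS; on Linux the build presumably emits libllama.so instead (an assumption about the build output, not confirmed by this README). A small sketch that picks the library name by platform:

```shell
# Point LIBLLAMA_PATH at the platform's shared library.
# Assumption: the CMake build produces libllama.dylib on macOS
# and libllama.so elsewhere.
case "$(uname -s)" in
  Darwin) export LIBLLAMA_PATH=../build/libllama.dylib ;;
  *)      export LIBLLAMA_PATH=../build/libllama.so ;;
esac
echo "$LIBLLAMA_PATH"
```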

License

This project is licensed under the MIT License.

Thanks

Thanks to the authors of llama.cpp and go-llama. A lot of the code in this repository is based on their work.
