8 skills found
jundot / Omlx: LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the macOS menu bar
jjang-ai / Mlxstudio: MLX Studio, home of JANG_Q. Image gen/edit plus chat/code, all in one, with OpenClaw (Anthropic API)
kspviswa / PyOMlx: A wannabe Ollama equivalent for Apple MLX models
jjang-ai / Vmlx: vMLX, home of JANG_Q. Continuous batching, prefix caching, paged attention, KV-cache quantization, VL support; powers MLX Studio. Image gen/edit, OpenAI/Anthropic APIs
jjang-ai / Jangq: JANG — GGUF for MLX. YOU MUST USE JANG_Q RUNTIME. Adaptive mixed-precision quantization + runtime for Apple Silicon
lisihao / ThunderOMLX: The most powerful local inference engine for the Mac mini, combining the strengths of oMLX, ThunderLLAMA, and ClawGate, with a web management panel and a macOS menu bar app
ZhengRui-Chen / Glint: Minimal local deployment of HY-MT1.5-1.8B-4bit on Apple Silicon using oMLX
Mizistein / Omlx: 🤖 Optimize LLM inference on Mac with continuous batching and SSD caching, managed from your menu bar