
PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU

TL;DR

PowerInfer is a CPU/GPU LLM inference engine leveraging activation locality for your device.

<a href="https://trendshift.io/repositories/6186" target="_blank"><img src="https://trendshift.io/api/badge/repositories/6186" alt="SJTU-IPADS%2FPowerInfer | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

License: MIT

Project Kanban

Latest News 🔥

  • [2026/1/5] We released Tiiny AI Pocket Lab, the world's first pocket-size supercomputer. It runs GPT-OSS-120B (int4) locally at 20 tokens/s. Featured at CES 2026.
  • [2025/7/27] We released SmallThinker-21BA3B-Instruct and SmallThinker-4BA0.6B-Instruct. We also released a corresponding framework for efficient on-device inference.
  • [2024/6/11] We are thrilled to introduce PowerInfer-2, our highly optimized inference framework designed specifically for smartphones. With TurboSparse-Mixtral-47B, it achieves an impressive speed of 11.68 tokens per second, which is up to 22 times faster than other state-of-the-art frameworks.
  • [2024/6/11] We are thrilled to present Turbo Sparse, our TurboSparse models for fast inference. With just $0.1M, we sparsified the original Mistral and Mixtral models to nearly 90% sparsity while maintaining superior performance! For a Mixtral-level model, our TurboSparse-Mixtral activates only 4B parameters!
  • [2024/5/20] Competition Recruitment: CCF-TCArch Customized Computing Challenge 2024. The CCF TCARCH CCC is a national competition organized by the Technical Committee on Computer Architecture (TCARCH) of the China Computer Federation (CCF). This year's competition aims to optimize the PowerInfer inference engine using the open-source ROCm/HIP. More information about the competition can be found here.
  • [2024/5/17] We now provide support for AMD devices with ROCm.
  • [2024/3/28] We are thrilled to present Bamboo LLM, which achieves both top-level performance and unparalleled speed with PowerInfer! Experience it with Bamboo-7B Base / DPO.
  • [2024/3/14] We supported ProSparse Llama 2 (7B/13B), ReLU models with ~90% sparsity, matching original Llama 2's performance (Thanks THUNLP & ModelBest)!
  • [2024/1/11] We supported Windows with GPU inference!
  • [2023/12/24] We released an online gradio demo for Falcon(ReLU)-40B-FP16!
  • [2023/12/19] We officially released PowerInfer!

Demo 🔥

https://github.com/SJTU-IPADS/PowerInfer/assets/34213478/fe441a42-5fce-448b-a3e5-ea4abb43ba23

PowerInfer vs. llama.cpp on a single RTX 4090 (24G) running Falcon(ReLU)-40B-FP16, with an 11x speedup!

<sub>Both PowerInfer and llama.cpp were running on the same hardware and fully utilized VRAM on RTX 4090.</sub>

> [!NOTE]
> Live Demo Online ⚡️
>
> Try out our Gradio server hosting Falcon(ReLU)-40B-FP16 on an RTX 4090!

<sub>Experimental and without warranties 🚧</sub>

Abstract

We introduce PowerInfer, a high-speed Large Language Model (LLM) inference engine for a personal computer (PC) equipped with a single consumer-grade GPU. The key insight underlying the design of PowerInfer is the high locality inherent in LLM inference, characterized by a power-law distribution in neuron activation.

This distribution indicates that a small subset of neurons, termed hot neurons, are consistently activated across inputs, while the majority, cold neurons, vary based on specific inputs. PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers. PowerInfer further integrates adaptive predictors and neuron-aware sparse operators, optimizing the efficiency of neuron activation and computational sparsity.

Evaluation shows that PowerInfer attains an average token generation rate of 13.20 tokens/s, with a peak of 29.08 tokens/s, across various LLMs (including OPT-175B) on a single NVIDIA RTX 4090 GPU, only 18% lower than that achieved by a top-tier server-grade A100 GPU. This significantly outperforms llama.cpp by up to 11.69x while retaining model accuracy.
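The hot/cold split described above can be sketched in a few lines. The following is an illustrative Python toy, not PowerInfer's actual C++ implementation: the function name, the GPU budget parameter, and the profile data are all invented for the example.

```python
# Illustrative sketch (not PowerInfer's actual code): split neurons into
# "hot" (preloaded on the GPU) and "cold" (computed on the CPU) based on
# profiled activation counts, exploiting the power-law activation distribution.
def split_hot_cold(activation_counts, gpu_budget):
    """Return (hot, cold) neuron index lists; `gpu_budget` caps hot neurons."""
    # Rank neurons by how often profiling saw them activate, most frequent first.
    ranked = sorted(range(len(activation_counts)),
                    key=lambda i: activation_counts[i], reverse=True)
    hot = ranked[:gpu_budget]    # preload these neurons' weights onto the GPU
    cold = ranked[gpu_budget:]   # leave these neurons on the CPU
    return hot, cold

# Toy power-law-like profile: a few neurons dominate the activations.
counts = [900, 5, 850, 3, 2, 700, 4, 1]
hot, cold = split_hot_cold(counts, gpu_budget=3)
print(hot)   # → [0, 2, 5]
```

Because the distribution is power-law, a small GPU budget captures most activations, which is what lets PowerInfer cut GPU memory demands and CPU-GPU transfers without hurting accuracy.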

Features

PowerInfer is a high-speed and easy-to-use inference engine for deploying LLMs locally.

PowerInfer is fast with:

  • Locality-centric design: Utilizes sparse activation and 'hot'/'cold' neuron concept for efficient LLM inference, ensuring high speed with lower resource demands.
  • Hybrid CPU/GPU Utilization: Seamlessly integrates memory/computation capabilities of CPU and GPU for a balanced workload and faster processing.

PowerInfer is flexible and easy to use with:

  • Easy Integration: Compatible with popular ReLU-sparse models.
  • Local Deployment Ease: Designed and deeply optimized for local deployment on consumer-grade hardware, enabling low-latency LLM inference and serving on a single GPU.
  • Backward Compatibility: Although PowerInfer is distinct from llama.cpp, most of examples/ (such as server and batched generation) can be used the same way as in llama.cpp. PowerInfer can also run inference with llama.cpp's model weights for compatibility purposes, but without any performance gain.

You can use these models with PowerInfer today:

  • Falcon-40B
  • Llama2 family
  • ProSparse Llama2 family
  • Bamboo-7B

We have tested PowerInfer on the following platforms:

  • x86-64 CPUs with AVX2 instructions, with or without NVIDIA GPUs, under Linux.
  • x86-64 CPUs with AVX2 instructions, with or without NVIDIA GPUs, under Windows.
  • Apple M Chips (CPU only) on macOS. (As we do not optimize for Mac, the performance improvement is not significant now.)

And new features coming soon:

  • Metal backend for sparse inference on macOS

Please refer to our Project Kanban for our current focus of development.

Getting Started

Setup and Installation

Pre-requisites

PowerInfer requires the following dependencies:

  • CMake (3.17+)
  • Python (3.8+) and pip (19.3+), for converting model weights and automatic FFN offloading

Get the Code

git clone https://github.com/Tiiny-AI/PowerInfer
cd PowerInfer
pip install -r requirements.txt # install Python helpers' dependencies

Build

PowerInfer is built with CMake (3.17+). Run the commands below from the root directory of the project, choosing the variant that matches your hardware.

  • If you have an NVIDIA GPU:
cmake -S . -B build -DLLAMA_CUBLAS=ON
cmake --build build --config Release
  • If you have an AMD GPU:
# Replace 'gfx1100' with your GPU's architecture name; you can get it with rocminfo
CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++ cmake -S . -B build -DLLAMA_HIPBLAS=ON -DAMDGPU_TARGETS=gfx1100
cmake --build build --config Release
  • If you have just CPU:
cmake -S . -B build
cmake --build build --config Release

Model Weights

PowerInfer models are stored in a special format called PowerInfer GGUF, based on the GGUF format, which contains both the LLM weights and the predictor weights.

Download PowerInfer GGUF via Hugging Face

You can obtain PowerInfer GGUF weights (*.powerinfer.gguf), along with profiled model activation statistics used for 'hot'-neuron offloading, from each Hugging Face repo below.

| Base Model | PowerInfer GGUF |
| --------------------- | ----------------------------------------- |
| LLaMA(ReLU)-2-7B | PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF |
| LLaMA(ReLU)-2-13B | PowerInfer/ReluLLaMA-13B-PowerInfer-GGUF |
| Falcon(ReLU)-40B | PowerInfer/ReluFalcon-40B-PowerInfer-GGUF |
| LLaMA(ReLU)-2-70B | PowerInfer/ReluLLaMA-70B-PowerInfer-GGUF |
| ProSparse-LLaMA-2-7B | PowerInfer/ProSparse-LLaMA-2-7B-GGUF |
| ProSparse-LLaMA-2-13B | PowerInfer/ProSparse-LLaMA-2-13B-GGUF |
| Bamboo-base-7B 🌟 | PowerInfer/Bamboo-base-v0.1-gguf |
| Bamboo-DPO-7B 🌟 | PowerInfer/Bamboo-DPO-v0.1-gguf |

We recommend using huggingface-cli to download the whole model repo. For example, the following command will download PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF into the `./ReluLLaMA-
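The same repo download can also be done from Python via the huggingface_hub library that backs huggingface-cli. A minimal sketch, assuming `huggingface_hub` is installed via pip; the local directory name is illustrative, not mandated by PowerInfer:

```python
# Hedged sketch: download a full PowerInfer GGUF repo via the
# huggingface_hub Python API (requires `pip install huggingface_hub`).
# The local_dir path below is illustrative.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF",
    local_dir="./ReluLLaMA-7B-PowerInfer-GGUF",
)
```

Downloading the whole repo (rather than a single file) matters here because PowerInfer needs the predictor weights and activation statistics alongside the *.powerinfer.gguf file.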
