# LL3M
LL3M writes Python code that generates 3D assets in Blender.
**Notice:** The model used in the paper, Claude Sonnet 3.7, has been retired. As a result, we have discontinued the LL3M server.

## Roadmap
- Abstract
- LL3M Setup
- Blender Setup
- Authentication
- Demo
- Limitation
- Frequently Asked Questions (FAQ)
- Feedback
- BibTeX
## Abstract
We present LL3M, a multi-agent system that leverages pretrained large language models (LLMs) to generate 3D assets by writing interpretable Python code in Blender. We break away from the typical generative approach that learns from a collection of 3D data. Instead, we reformulate shape generation as a code-writing task, enabling greater modularity, editability, and integration with artist workflows. Given a text prompt, LL3M coordinates a team of specialized LLM agents to plan, retrieve, write, debug, and refine Blender scripts that generate and edit geometry and appearance. The generated code serves as a high-level, interpretable, human-readable, well-documented representation of scenes and objects, making full use of sophisticated Blender constructs (e.g., BMesh, geometry modifiers, shader nodes) for diverse, unconstrained shapes, materials, and scenes. This code presents many avenues for further agent and human editing and experimentation via code tweaks or procedural parameters. This medium naturally enables a co‑creative loop in our system: agents can automatically self‑critique using code and visuals, while iterative user instructions provide an intuitive way to refine assets. A shared code context across agents enables awareness of previous attempts, and a retrieval‑augmented generation knowledge base built from Blender API documentation (BlenderRAG) equips agents with examples, types, and functions, enabling advanced modeling operations and improving code correctness. We demonstrate the effectiveness of LL3M across diverse shape categories, style and material edits, and user‑driven refinements. Our experiments showcase the power of code as a generative and interpretable medium for 3D asset creation.
## LL3M Setup
To get started, make sure both Blender and LL3M are running on the same machine.
### Requirements
| Minimum Requirements | Recommended Requirements |
|----------------------|--------------------------|
| OS: Windows 10/11, macOS 10.15+, or Linux (Ubuntu 18.04+) | OS: Windows 11, macOS 12+, or Linux (Ubuntu 20.04+) |
| CPU: Intel Core i5-8400 / AMD Ryzen 5 2600 or equivalent | CPU: Intel Core i7-10700K / AMD Ryzen 7 3700X or better |
| RAM: 8 GB | RAM: 16 GB (32 GB for complex scenes) |
| GPU: DirectX 11 compatible graphics card with 2 GB VRAM | GPU: NVIDIA RTX 3060 / AMD RX 6600 XT or better with 8 GB+ VRAM |
| Storage: 5 GB free space | Storage: 10 GB free space (SSD recommended) |
### Installation

- Clone the repository:

  ```shell
  git clone https://github.com/threedle/ll3m.git
  cd ll3m
  ```

- Create and activate a conda environment:

  ```shell
  conda create -n ll3m python=3.12 -y
  conda activate ll3m
  ```

- Install the required packages:

  ```shell
  pip install -r requirements.txt
  ```
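The environment above pins Python 3.12. As a quick sanity check after installation, a short snippet (a hypothetical helper, not part of the repository) can confirm the active interpreter meets that requirement:

```python
import sys

def meets_requirement(version=None, required=(3, 12)):
    """Return True if the (major, minor) version is at least `required`."""
    version = sys.version_info if version is None else version
    return tuple(version[:2]) >= required

if __name__ == "__main__":
    status = "OK" if meets_requirement() else "needs Python >= 3.12"
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: {status}")
```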
## Blender Setup
We used Blender 4.4 in our experiments.
### Install Blender 4.4
Download Blender 4.4 here; installers for each operating system are available on that page. The exact patch release does not matter: any Blender 4.4.x version will work.
### Install the LL3M Blender Addon
- Locate the `./blender/addon.py` file in this repository.
- Open Blender.
- Go to Edit > Preferences > Add-ons.
- Click the arrow on the right > "Install from Disk...", and select the `./blender/addon.py` file.
- Enable the addon by checking the box next to "LL3M Blender" and close the Preferences window.
- If you close and reopen Blender and the "LL3M Blender" add-on does not appear, go to Edit > Preferences > Add-ons and enable it again.
### Start the LL3M Server
- In Blender, go to the 3D View sidebar (press N if it is not visible).
- Click the "LL3M" tab, then click "Start LL3M Server".
When you see "Running on port 8888", you can proceed.
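If the client later fails to reach the server, a quick reachability check can confirm something is listening on that port. This is an illustrative helper, not part of the repository; the port number 8888 comes from the message above:

```python
import socket

def server_ready(host="127.0.0.1", port=8888, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```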
## Authentication
LL3M is a client-server application. For this demo, you need an account to use it.
Note: Each account is limited to 5 requests per day.
### Login
```shell
python main.py --login
```
This opens a browser window for sign‑in. After success, tokens are stored locally and used automatically by main.py.
### Accept Terms
Before using the LL3M service, you must accept the terms and conditions.
```shell
python main.py --accept-terms
```
You'll be prompted to type "yes" to confirm your agreement.
You only need to accept the terms and conditions once; they do not expire.
### Logout
```shell
python main.py --logout
```
Use this to remove tokens from your machine, or if you need to switch accounts.
Authentication tokens are valid for one hour. When your token expires, simply rerun `--login` to sign in again.
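Since tokens expire after one hour, a script that wraps `main.py` could check token age before making requests. A minimal sketch, assuming the issue time is tracked as a Unix timestamp (the actual token format is not specified here):

```python
import time

TOKEN_TTL_SECONDS = 3600  # tokens are valid for one hour

def token_expired(issued_at, now=None, ttl=TOKEN_TTL_SECONDS):
    """Return True if a token issued at `issued_at` (Unix time) has expired."""
    now = time.time() if now is None else now
    return (now - issued_at) >= ttl
```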
Privacy Notice: Your account information is used solely for authentication purposes with the LL3M service. LL3M does not access, store, or utilize your account credentials for any other purpose beyond verifying your identity and enabling access to the service.
## Configuration
The LL3M client can be customized through the config/config.yaml file. Here's a quick reference for all available settings:
Customizable Configuration Settings
| Section | Setting | Default | Options | Description |
|---------|---------|---------|---------|-------------|
| blender | headless_rendering | false | true/false | Enable headless rendering in Blender |
| blender | gpu_rendering | false | true/false | Use GPU acceleration (faster if compatible GPU available) |
| render | num_images | 5 | 1-10 | Number of images to render per instruction |
| render | resolution_scale | 1 | 0.1-1.0 | Image resolution (1.0 = 1920x1080, 0.5 = 960x540) |
Configure this file before you run main.py.
If your hardware is low-end or you experience slow performance, consider lowering `num_images` or reducing `resolution_scale` in `config/config.yaml` to improve speed and stability.
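The `resolution_scale` values in the table map to pixel dimensions by simple multiplication against the 1920x1080 base. A small helper illustrating the arithmetic (not the client's actual code):

```python
BASE_WIDTH, BASE_HEIGHT = 1920, 1080  # base resolution from the config table

def effective_resolution(scale, base=(BASE_WIDTH, BASE_HEIGHT)):
    """Scale the base render resolution; `scale` is expected in (0.1, 1.0]."""
    width, height = base
    return int(width * scale), int(height * scale)
```

For example, `resolution_scale: 0.5` yields 960x540, which renders roughly four times faster than full resolution since pixel count scales with the square of the factor.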
## Demo
### Before you start
Double-check the following:
- Blender is open with LL3M enabled. If not, see Blender setup.
- You have successfully logged in. If not, see Login.
- You have accepted the Terms and Conditions. If not, see Accept Terms.
- Optional: the configuration in `config/config.yaml` is set for customizable rendering. If not, see Configuration.
### Usage

```shell
# Use text prompt
python main.py --text "Create a 3D object of a chair"

# Use image as input
python main.py --image /path/to/chair.png

# Get help
python main.py --help
```

`--text` and `--image` are mutually exclusive: you can choose only one. This prevents mismatches between text and image inputs.
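This kind of constraint is exactly what argparse's mutually exclusive groups provide. A sketch of how a client like `main.py` might enforce it (illustrative, not the repository's actual parser):

```python
import argparse

def build_parser():
    """Build a parser where --text and --image cannot be combined."""
    parser = argparse.ArgumentParser(description="LL3M client (sketch)")
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument("--text", help="text prompt describing the asset")
    group.add_argument("--image", help="path to an input image")
    return parser
```

Passing both flags makes argparse exit with an error message, so mismatched inputs are rejected before any request is sent.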
### Console Messages

During execution, you may see messages such as the following:

```text
===== Executing Blender code from server =====
import bpy
scene = bpy.context.scene
_ = len(bpy.data.objects)
print(f'Blender preflight OK: scene={scene.name}, objects={_}')
===== End snippet =====
```
This indicates the client received a Blender Python (bpy) script from the server. The script will now be executed in Blender.
```text
[Client] [OK] Blender code executed successfully! Waiting for next action from server...
```
Execution succeeded. The client is waiting for the next instruction from the server.
```text
[Client] [RETRY] Blender code execution failed. Waiting for server to provide corrected version...
Error details: 'bpy_prop_collection[key]: key "Specular" not found'
```
This indicates execution failed. The error is returned to the server, which will respond with a corrected script.
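This execute-and-report cycle can be sketched in plain Python: run the received snippet, and on failure capture the traceback so it can be sent back for correction. This is a simplified illustration; the real client executes the script inside Blender's `bpy` environment:

```python
import traceback

def run_snippet(code, env=None):
    """Execute a code string; return (ok, error_text) for reporting back."""
    env = {} if env is None else env
    try:
        exec(code, env)
        return True, ""
    except Exception:
        # The full traceback gives the server context to write a corrected script.
        return False, traceback.format_exc()
```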
```text
[Initial Creation Phase: 0:01:00]
[Auto Refinement Phase: 0:01:00]
[User Guided Refinement Phase: 0:01:00]
```
These lines show the elapsed time for each phase.
For complex prompts, LL3M may require additional time. Continue waiting for the next instruction.
```text
[Client] Render resolution: 1920 x 1080 (base 1920x1080 @ 100%)
[Client] Rendering method: CPU rendering (config setting)
[Client] Uploading 5 rendered images (prefix=render) to server...
[Client] Upload complete: 5 images uploaded.
[Client] [OK] Blender code executed successfully! Waiting for next action from server...
```
This indicates Blender is rendering images according to your configuration settings. Rendering can be computationally intensive and may temporarily freeze Blender. For details, see the Frequently Asked Questions (FAQ).
### Indication for User Guided Refinement

```text
[Phase] user_guided_refinement
[User Guided Refinement Phase: 0:00:00]
Enter the instruction: (Type 'TERMINATE' to exit)
[WARNING: Session will timeout after 3 minutes of inactivity]
<ENTER YOUR INPUT HERE>
```

This indicates the program has entered the user-guided refinement phase and is waiting for input. Provide instructions to refine the 3D object, or type `TERMINATE` to exit.