# PyHuskyLens

A clean Python implementation of the HuskyLens protocol for UART and I2C.

A universal Python library for connecting the HuskyLens AI camera to robotics platforms. It supports both V1 and V2 hardware over I2C and Serial (UART) interfaces, and works with MicroPython (LEGO, ESP32) and CPython (Raspberry Pi). Suitable for LEGO robotics, ESP32 projects, Raspberry Pi vision systems, and any Python-enabled device.

## Table of Contents
- Features
- Installation
- Quick Start
- Hardware Connection
- Example Projects
- API Reference
- Supported Platforms
- Troubleshooting
- License
- Author
## Features

- 🤖 Auto-detection: Automatically detects HuskyLens V1 or V2 hardware
- 🔌 Dual Interface: Supports both I2C and Serial (UART) communication
- 🐍 Dual Platform: Works with MicroPython and CPython (Raspberry Pi)
- 🍓 Raspberry Pi Compatible: Native support for Raspberry Pi I2C and Serial
- 🎯 Full Algorithm Support: All 14 AI algorithms, including face recognition, object tracking, pose detection, hand recognition, and more
- 🦾 Extended Detection: Full V2 support with facial landmarks, 21-point hand keypoints, and 17-point body pose
- 💾 Memory Optimized: Efficient bytearrays and data structures for MicroPython
- 🔄 Backward Compatible: Unified `HuskyLens()` constructor works with all platforms
- 🧩 Clean Architecture: Base class pattern with separate I2C and Serial implementations
- 📦 Platform Agnostic: Works with pybricks-micropython, standard MicroPython, SPIKE, Robot Inventor, and Raspberry Pi CPython
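The auto-detection behind the unified constructor can be pictured as type-based dispatch: the library inspects the argument you pass and picks a backend. The sketch below is purely illustrative; `pick_backend` and the returned labels are hypothetical names, not the library's actual internals:

```python
# Illustrative sketch of type-based backend dispatch (hypothetical names,
# not the library's real internals).

def pick_backend(arg):
    """Guess which HuskyLens backend a constructor argument would select."""
    if isinstance(arg, int):
        return "i2c-bus"          # e.g. HuskyLens(1) on Raspberry Pi
    if isinstance(arg, str):
        if arg.startswith("/dev/"):
            return "serial-port"  # e.g. HuskyLens("/dev/ttyUSB0")
        return "spike-port"       # e.g. HuskyLens('E') on SPIKE Prime
    return "machine-bus"          # a machine.SoftI2C or machine.UART object

print(pick_backend(1))               # i2c-bus
print(pick_backend("/dev/ttyUSB0"))  # serial-port
print(pick_backend("E"))             # spike-port
```

This is why the Quick Start examples below can all call the same `HuskyLens(...)` constructor with very different arguments.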
## Installation

### SPIKE Legacy and Robot Inventor 51515

Copy and paste the contents of `pyhuskylens/pyhuskylens.py` above your script.
### ESP32 / MicroPython

#### Recommended: Using ViperIDE

The easiest way to install is using ViperIDE:

- Open ViperIDE in your browser
- Connect to your ESP32 device
- Go to Tools → Install package via link
- Enter: `github:antonvh/PyHuskyLens`
- Click Install
#### Alternative: Using mip

Install from your device (if it is internet-connected):

```python
import mip
mip.install("github:antonvh/PyHuskyLens")
```

#### Manual Installation

Copy `pyhuskylens/pyhuskylens.py` to your device's filesystem.
### EV3 with pybricks-micropython

Copy the library file to your project and import it directly.
### Raspberry Pi (CPython)

Install from PyPI:

```shell
pip install pyhuskylens
```

For I2C support, install smbus2:

```shell
pip install pyhuskylens[i2c]
```

For Serial/UART support, install pyserial:

```shell
pip install pyhuskylens[serial]
```

For both I2C and Serial:

```shell
pip install pyhuskylens[all]
```
## Quick Start

### Raspberry Pi I2C

```python
from pyhuskylens import HuskyLens, ALGORITHM_OBJECT_RECOGNITION

# Use the I2C bus number (0 or 1; typically 1 on Raspberry Pi)
hl = HuskyLens(1)  # Automatically uses HuskyLensI2C_RPi

if hl.knock():
    print("Connected to HuskyLens V" + str(hl.version))

# Set the algorithm
hl.set_alg(ALGORITHM_OBJECT_RECOGNITION)

# Get detected objects
blocks = hl.get_blocks()
print("Found " + str(len(blocks)) + " objects")
for block in blocks:
    print("Object at ({},{}) size {}x{}".format(
        block.x, block.y, block.width, block.height))
```
### Raspberry Pi Serial

```python
from pyhuskylens import HuskyLens, ALGORITHM_FACE_RECOGNITION

# Use the serial port path (typically /dev/ttyUSB0 or /dev/ttyAMA0)
hl = HuskyLens("/dev/ttyUSB0")  # Automatically uses HuskyLensSerial_RPi

if hl.knock():
    print("Connected!")

hl.set_alg(ALGORITHM_FACE_RECOGNITION)
while True:
    blocks = hl.get_blocks(learned=True)
    if len(blocks) > 0:
        face = blocks[0]
        print("Hello, ID: " + str(face.ID))
```
### Simple Object Detection with Auto-Detection

```python
from pyhuskylens import HuskyLens, ALGORITHM_OBJECT_RECOGNITION

# The constructor auto-detects I2C or Serial/UART based on the parameter type.

# For I2C (ESP32):
from machine import Pin, SoftI2C
i2c = SoftI2C(scl=Pin(22), sda=Pin(21), freq=100000)
hl = HuskyLens(i2c)

# For SPIKE Prime (port string):
# hl = HuskyLens('E')

# For EV3 (Port object):
# from pybricks.parameters import Port
# hl = HuskyLens(Port.S1)

if hl.knock():
    print("Connected to HuskyLens V" + str(hl.version))

# Set the algorithm
hl.set_alg(ALGORITHM_OBJECT_RECOGNITION)

# Get detected objects
blocks = hl.get_blocks()
print("Found " + str(len(blocks)) + " objects")
for block in blocks:
    print("Object at ({},{}) size {}x{}".format(
        block.x, block.y, block.width, block.height))
```
### Face Recognition

```python
from pyhuskylens import HuskyLens, ALGORITHM_FACE_RECOGNITION

hl = HuskyLens(i2c)  # or a port string, or a Port object
if hl.knock():
    hl.set_alg(ALGORITHM_FACE_RECOGNITION)
    while True:
        blocks = hl.get_blocks(learned=True)  # Only learned faces
        if len(blocks) > 0:
            face = blocks[0]
            print("Hello, ID: " + str(face.ID))
            # V2 only: access facial landmarks
            if face.type == "FACE":
                print("Eyes: ({},{}) ({},{})".format(
                    face.leye_x, face.leye_y,
                    face.reye_x, face.reye_y))
```
### Line Following with Arrows

```python
from pyhuskylens import HuskyLens, ALGORITHM_LINE_TRACKING
from pybricks.parameters import Port  # EV3/Pybricks example

hl = HuskyLens(Port.S1)
hl.set_alg(ALGORITHM_LINE_TRACKING)
while True:
    arrows = hl.get_arrows(learned=True)
    if len(arrows) > 0:
        arrow = arrows[0]
        # Calculate steering based on the arrow's position
        # (160 is the horizontal center of the 320-pixel-wide screen)
        center_offset = arrow.x_head - 160
        direction = arrow.direction  # Angle in degrees
        print("Steer: {} Direction: {}°".format(center_offset, direction))
```
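To turn that offset into an actual motor command, a simple proportional controller is usually enough. The helper below is a sketch with no hardware dependencies; the gain `k_p` and the `max_steer` clamp are illustrative tuning values you would adjust for your own robot, not library constants:

```python
def steering_from_arrow(x_head, screen_width=320, k_p=0.5, max_steer=100):
    """Map the arrow head's x position to a steering value in [-max_steer, max_steer].

    k_p and max_steer are illustrative tuning values, not library constants.
    """
    center_offset = x_head - screen_width // 2
    steer = k_p * center_offset
    # Clamp so a wildly off-center arrow can't saturate the drivebase
    return max(-max_steer, min(max_steer, steer))

print(steering_from_arrow(160))    # 0.0: arrow dead ahead, drive straight
print(steering_from_arrow(260))    # 50.0: arrow right of center, steer right
print(steering_from_arrow(-1000))  # -100: clamped at the limit
```

On an EV3 you could then feed the result into something like Pybricks' `DriveBase.drive(speed, turn_rate)` inside the loop above.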
### Display Text on HuskyLens Screen

```python
from pyhuskylens import HuskyLens, COLOR_GREEN

hl = HuskyLens(i2c)

# Clear any existing text
hl.clear_text()

# Show text (supports both old- and new-style arguments)
hl.show_text("Hello Robot!", position=(100, 120))
hl.show_text("Status: OK", x=100, y=140, color=COLOR_GREEN)
```
### Advanced: Hand Gesture Recognition (V2 Only)

```python
from pyhuskylens import HuskyLens, ALGORITHM_HAND_RECOGNITION

hl = HuskyLens(i2c)
hl.set_alg(ALGORITHM_HAND_RECOGNITION)
while True:
    results = hl.get()  # Get all detection data
    hands = results['hands']  # Or use the HANDS constant
    for hand in hands:
        # Access the 21 hand keypoints
        print("Wrist: ({},{})".format(hand.wrist_x, hand.wrist_y))
        print("Thumb tip: ({},{})".format(hand.thumb_tip_x, hand.thumb_tip_y))
        print("Index tip: ({},{})".format(hand.index_finger_tip_x, hand.index_finger_tip_y))
        # Derive a gesture from keypoint positions
        # (screen y grows downward, so "up" means a smaller y)
        thumb_up = hand.thumb_tip_y < hand.thumb_mcp_y
        if thumb_up:
            print("Thumbs up!")
```
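The same tip-versus-joint comparison extends naturally to counting raised fingers. The sketch below deliberately works on plain `(tip_y, joint_y)` pairs rather than hand attributes, so you can feed it whichever of the 21 keypoints you prefer; the helper name `count_raised_fingers` is ours, not part of the library:

```python
def count_raised_fingers(finger_points):
    """Count fingers whose tip is above its lower joint on screen.

    finger_points: list of (tip_y, joint_y) pairs, one per finger.
    Screen y grows downward, so tip_y < joint_y means the finger points up.
    """
    return sum(1 for tip_y, joint_y in finger_points if tip_y < joint_y)

# Three fingers up, two curled:
points = [(50, 120), (40, 110), (45, 115), (130, 100), (140, 105)]
print(count_raised_fingers(points))  # 3
```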
### Advanced: Body Pose Detection (V2 Only)

```python
from pyhuskylens import HuskyLens, ALGORITHM_POSE_RECOGNITION

hl = HuskyLens(i2c)
hl.set_alg(ALGORITHM_POSE_RECOGNITION)
while True:
    results = hl.get()
    poses = results['poses']  # Or use the POSES constant
    for pose in poses:
        # Access the 17 body keypoints
        print("Nose: ({},{})".format(pose.nose_x, pose.nose_y))
        print("Shoulders: ({},{}) ({},{})".format(
            pose.lshoulder_x, pose.lshoulder_y,
            pose.rshoulder_x, pose.rshoulder_y))
        # Estimate standing vs. sitting from the hip-to-shoulder distance
        hip_y = (pose.lhip_y + pose.rhip_y) / 2
        shoulder_y = (pose.lshoulder_y + pose.rshoulder_y) / 2
        standing = abs(hip_y - shoulder_y) > 50
        print("Standing" if standing else "Sitting")
```
## Hardware Connection

### I2C Connection

- HuskyLens V1: I2C address `0x32`
- HuskyLens V2: I2C address `0x50`
- Connections: SDA, SCL, GND, 5V

```python
# Raspberry Pi example (uses the I2C bus number)
hl = HuskyLens(1)  # I2C bus 1 (/dev/i2c-1)

# ESP32 example
from machine import Pin, SoftI2C
i2c = SoftI2C(scl=Pin(22), sda=Pin(21), freq=100000)
hl = HuskyLens(i2c)
```

Raspberry Pi I2C Setup:

Enable I2C using raspi-config:

```shell
sudo raspi-config
# Select: Interface Options → I2C → Yes
```

Verify I2C is working:

```shell
sudo apt-get install i2c-tools
i2cdetect -y 1  # Should show 0x32 (V1) or 0x50 (V2)
```
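On MicroPython boards you can run the same check in code: scan the bus and look for one of the two addresses listed above. This is a sketch; the addresses come from the list above, and the `identify_huskylens` helper name is ours, not the library's:

```python
HUSKYLENS_V1_ADDR = 0x32  # addresses from the list above
HUSKYLENS_V2_ADDR = 0x50

def identify_huskylens(addresses):
    """Map a list of I2C scan results to a HuskyLens version string, or None."""
    if HUSKYLENS_V1_ADDR in addresses:
        return "V1"
    if HUSKYLENS_V2_ADDR in addresses:
        return "V2"
    return None

# On an ESP32 you would pass i2c.scan() here:
# print(identify_huskylens(i2c.scan()))
print(identify_huskylens([0x32]))        # V1
print(identify_huskylens([0x50, 0x68]))  # V2
print(identify_huskylens([]))            # None
```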
### Serial/UART Connection

- Baudrate: 9600
- HuskyLens Pin 1 (Green) = Tx → connect to the robot's Rx
- HuskyLens Pin 2 (Blue) = Rx → connect to the robot's Tx
- Also connect GND and 5V

```python
# Raspberry Pi example (uses the serial port path)
hl = HuskyLens("/dev/ttyUSB0")  # USB serial adapter
# or
hl = HuskyLens("/dev/ttyAMA0")  # Built-in UART (Raspberry Pi 3/4/5)

# ESP32 example
from machine import UART
uart = UART(1, baudrate=9600, tx=17, rx=16)
hl = HuskyLens(uart)

# EV3 Pybricks example
from pybricks.iodevices import UARTDevice
from pybricks.parameters import Port
uart = UARTDevice(Port.S1, 9600)
hl = HuskyLens(uart)

# SPIKE Prime example (a port string auto-handles setup)
hl = HuskyLens('E')  # Port E
```
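For the curious: everything the library sends over UART (or I2C) is a small framed packet. The sketch below builds the `knock()` request as we understand the HuskyLens wire protocol: a fixed header and address (`0x55 0xAA 0x11`), a payload length, a command byte, and a one-byte sum checksum. Treat the exact byte values as an assumption to verify against the official protocol description:

```python
HEADER = bytes([0x55, 0xAA, 0x11])  # header bytes + device address, per our
                                    # reading of the HuskyLens protocol docs
CMD_KNOCK = 0x2C                    # "are you there?" request (assumed value)

def build_frame(command, payload=b""):
    """Frame a command: header, payload length, command, payload, checksum."""
    frame = bytearray(HEADER)
    frame.append(len(payload))
    frame.append(command)
    frame.extend(payload)
    frame.append(sum(frame) & 0xFF)  # checksum = low byte of the running sum
    return bytes(frame)

knock = build_frame(CMD_KNOCK)
print(knock.hex())  # 55aa11002c3c
```

The library's `knock()` method wraps this exchange for you; you never need to build frames by hand.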
Raspberry Pi Serial Setup:

For the built-in UART (/dev/ttyAMA0), disable the serial console:

```shell
sudo raspi-config
# Select: Interface Options → Serial Port
# "Would you like a login shell to be accessible over serial?" → No
# "Would you like the serial port hardware to be enabled?" → Yes
```