# DeepFlows

DeepFlows is a lightweight deep learning framework for teaching and experimentation. It includes tensor encapsulation, automatic differentiation, backend abstraction, common neural network modules and optimizers, as well as example training scripts and serving modules.
## Install / Use

`/learn @Ch0ser/DeepflowsREADME`
## Directory Structure (Brief)

- `DeepFlows/tensor.py` — core implementation of `Tensor` and the computation graph.
- `DeepFlows/autograd.py` — autograd switches (`no_grad`/`enable_grad`) and context management.
- `DeepFlows/backend/` — backend abstraction and `BackendTensor` (CPU / CPU-NumPy / CUDA protocols) for multi-backend compatibility; see `DeepFlows/backend/backend_tensor.py`.
- `DeepFlows/nn/` — neural network modules and functions (the `Module` base class, common layers, loss functions, etc.); the core `Module` implementation is in `nn/modules/module.py`.
- `DeepFlows/optim/` — optimizers (Adam, SGD, Adagrad, Adadelta, etc.).
- `DeepFlows/utils/` — data loading, evaluation, and visualization tools.
- `dist/`, `MyDLFW_serving/` — distributed training and model-serving prototypes/examples.
## Quick Start

1. Environment preparation
   - Install `python>=3.8` and `numpy`.
   - Optional: the CUDA backend requires the CUDA toolkit and a compiled backend `.pyd` (see below). You can compile it yourself or drop in a prebuilt `.pyd` directly.

2. Data preparation
   - MNIST: place the raw IDX files in `Deepflows\data\MNIST\raw` (including `train-images-idx3-ubyte`, `train-labels-idx1-ubyte`, `t10k-images-idx3-ubyte`, `t10k-labels-idx1-ubyte`).
   - CIFAR-10: place the Python-version batch files in `Deepflows\data\cifar-10-batches-py` (including `data_batch_1`..`data_batch_5`, `test_batch`).
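The raw MNIST IDX files mentioned above have a simple binary layout (big-endian magic, dimension sizes, then the data bytes) and can be parsed with NumPy alone. DeepFlows ships its own data loading under `DeepFlows/utils/`; the function below is a hypothetical stand-in to illustrate the format, not the framework's actual loader:

```python
import struct
import numpy as np

def load_idx(path):
    """Parse an IDX file (MNIST raw format).

    Layout: 2 zero bytes, 1 dtype byte (0x08 = unsigned byte),
    1 byte for the number of dimensions, then one big-endian
    uint32 per dimension, followed by the raw data bytes.
    """
    with open(path, "rb") as f:
        zero, dtype_code, ndim = struct.unpack(">HBB", f.read(4))
        assert zero == 0 and dtype_code == 0x08  # unsigned-byte data expected
        shape = struct.unpack(">" + "I" * ndim, f.read(4 * ndim))
        data = np.frombuffer(f.read(), dtype=np.uint8)
    return data.reshape(shape)
```

For example, `load_idx(r"Deepflows\data\MNIST\raw\train-images-idx3-ubyte")` would yield an array of shape `(60000, 28, 28)`.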
3. Run example scripts (for details see `test/scripts_description_en.md`)
   - Basic & MLP
     - Linear Regression (CPU): `python test/LinearRegression.py`
     - MLP-MNIST (CPU/CUDA): `python test/MLP_MNIST.py` / `python test/MLP_MNIST_cuda.py`
   - CNN (convolutional neural networks)
     - MNIST (CPU/CUDA): `python test/CNN_MNIST.py` / `python test/CNN_MNIST_cuda.py`
     - CIFAR-10 (CPU/CUDA): `python test/CNN_CIFAR10.py` / `python test/CNN_CIFAR10_cuda.py`
     - Animal-10 (CUDA): `python test/CNN_Animal10_cuda.py`
     - Dishes (CUDA): `python test/CNN_Dishes_cuda.py`
   - Advanced architectures (ResNet/MobileNet)
     - ResNet (Animal-10 / CIFAR-10, CUDA): `python test/ResNet_Animal10_cuda.py`, `python test/ResNet_CIFAR10_cuda.py`
     - MobileNet implementation: `test/MobileNet.py`
   - Functional tests
     - CUDA low-level test: `python test/test_cuda.py`
     - Model save/load test: `python test/CNN_CIFAR10_cuda_model_save_load_test.py`
> Note: The CUDA scripts explicitly use `device='cuda'`. If the CUDA backend is not enabled, they will fail to run; compile the backend or drop in the compiled `.pyd` first.
## Core Usage Points

- Tensor & backend: create data with `Tensor(...)` or directly with `BackendTensor(...)`; the backend layer bridges NumPy and custom backends automatically.
- Autograd: enabled by default; control it explicitly with `with DeepFlows.no_grad():` or `with DeepFlows.enable_grad():` (see `autograd.py`).
- Building models: inherit from `nn.Module`, implement `forward`, and pass `model.parameters()` to an optimizer for training.
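The `nn.Module` pattern above is the usual register-and-collect mechanism. As a hedged, framework-free sketch of how such a base class typically gathers parameters (the classes and initialization below are illustrative, not DeepFlows' actual implementation):

```python
import numpy as np

class Module:
    """Minimal Module sketch: collects parameters from attributes recursively."""
    def parameters(self):
        params = []
        for value in vars(self).values():
            if isinstance(value, np.ndarray):    # treat raw arrays as parameters
                params.append(value)
            elif isinstance(value, Module):      # recurse into submodules
                params.extend(value.parameters())
        return params

    def __call__(self, x):
        return self.forward(x)

class Linear(Module):
    def __init__(self, in_features, out_features):
        self.weight = np.random.randn(in_features, out_features) * 0.01
        self.bias = np.zeros(out_features)

    def forward(self, x):
        return x @ self.weight + self.bias

class MLP(Module):
    def __init__(self):
        self.fc1 = Linear(4, 8)
        self.fc2 = Linear(8, 2)

    def forward(self, x):
        return self.fc2(np.maximum(self.fc1(x), 0.0))  # ReLU in between

model = MLP()
print(len(model.parameters()))  # 4 parameter arrays: two weights, two biases
```

An optimizer then only needs the flat list returned by `model.parameters()`, which is why the `parameters()`-plus-optimizer pairing works for arbitrarily nested models.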
## Example Code Locations

- Package export entry: `DeepFlows/__init__.py`
- Tensor core: `DeepFlows/tensor.py`
- Autograd: `DeepFlows/autograd.py`
- Backend abstraction: `DeepFlows/backend/backend_tensor.py`
- Example scripts: the `test/` directory contains training scripts for various architectures (CNN, ResNet, MobileNet) and datasets (MNIST, CIFAR-10, Animal-10). See `test/scripts_description_en.md`.
## Pretrained Models & Transfer Learning

DeepFlows supports loading pretrained weights for common vision models (sourced from PyTorch), providing a foundation for transfer learning. The core implementation is in `DeepFlows/utils/pretrained_models.py`.

### Key Features

- Model support: currently `resnet18`, `resnet50`, `mobilenet_v1`, and `vgg16`.
- Weight conversion: tools to automatically convert weights from PyTorch format to DeepFlows format.
- Automatic management: automatic downloading, caching, and loading of pretrained weights.
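The heart of such a weight conversion is usually a key-remapping pass over the source state dict. The real rules live in `DeepFlows/utils/pretrained_models.py`; the sketch below only illustrates the general idea, and every name in it (`convert_state_dict`, the prefixes, the sample keys) is hypothetical:

```python
# Hypothetical key-remapping step of a PyTorch -> DeepFlows weight conversion.
# The actual mapping is defined in DeepFlows/utils/pretrained_models.py.
def convert_state_dict(torch_state, rename_rules):
    """Rename keys by (old_prefix, new_prefix) rules; drop keys with no rule."""
    converted = {}
    for key, tensor in torch_state.items():
        for old, new in rename_rules:
            if key.startswith(old):
                converted[new + key[len(old):]] = tensor
                break
    return converted

rules = [("features.", "backbone."), ("classifier.", "head.")]
state = {"features.0.weight": [1.0], "classifier.weight": [2.0], "aux.bias": [3.0]}
print(convert_state_dict(state, rules))
# {'backbone.0.weight': [1.0], 'head.weight': [2.0]}
```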
### Quick Start

You can quickly create a model loaded with pretrained weights:

```python
from DeepFlows.utils.pretrained_models import create_model_with_pretrained_weights

# Automatically download (if needed), convert, and create a model with loaded weights
# Supported model names: 'resnet18', 'resnet50', 'mobilenet_v1', 'vgg16'
model = create_model_with_pretrained_weights('resnet18')
```
Or run the test script to walk through the full process:

```shell
python test/test_pretrained_models.py
```
This script demonstrates how to list available models, download weights, convert formats, and load them into a model.
## FAQ & Assumptions

- Python and NumPy are assumed to be installed. CPU mode requires no extra compilation.
- GPU use requires enabling the CUDA backend: the project attempts `from DeepFlows.backend.backend_src.build.Release import CUDA_BACKEND`. If the import fails, the backend is treated as disabled.
## CUDA Backend: Compilation & Import

1. Self-compile (Windows)
   - Dependencies: CUDA Toolkit (including NVCC), CMake, Visual Studio Build Tools.
   - Steps:
     - Enter `DeepFlows/backend/backend_src`, generate the VS project with CMake, and build the `Release` configuration.
     - On success, the artifact is located at `DeepFlows/backend/backend_src/build/Release/CUDA_BACKEND.pyd`.
     - This path matches the project's import statement, so no extra configuration is needed.

2. Directly import a compiled `.pyd`
   - Place the compiled `CUDA_BACKEND.pyd` into `DeepFlows/backend/backend_src/build/Release/`.
   - Python will then load it automatically via `from DeepFlows.backend.backend_src.build.Release import CUDA_BACKEND`.

3. Verify the backend is enabled
   - Execute in a Python REPL:

     ```python
     from DeepFlows.backend.backend_tensor import cuda
     print(cuda().enabled())
     ```

   - If the output is `True`, the CUDA backend is loaded correctly; otherwise it falls back to disabled.
## DeepFlows Visualization Platform Guide
Welcome to the DeepFlows Visualization Platform! This is a full-stack application based on Vue 3 + FastAPI, designed to demonstrate the training process and model building capabilities of our self-developed deep learning framework DeepFlows via an intuitive Web interface.
### 🛠️ Environment Setup & Installation

#### 1. Basic Requirements

- Python: 3.10 or higher
- Node.js: 16.0 or higher (LTS recommended)
- CUDA (optional): for GPU acceleration, ensure the NVIDIA CUDA Toolkit (11.x or 12.x recommended) is installed.
#### 2. Backend Configuration (Visualization_backend)

The backend is responsible for running the deep learning framework, handling training tasks, and pushing real-time data via WebSockets.
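The exact message schema is defined by the backend code; purely as an illustration, a per-batch metrics update pushed over the WebSocket might look like the following (the field names and `"type"` tag are assumptions, not the actual protocol):

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical shape of a training-metrics message pushed to the frontend.
# Field names are illustrative; the real schema lives in the backend code.
@dataclass
class TrainingUpdate:
    epoch: int
    batch: int
    loss: float
    accuracy: float

def to_ws_message(update: TrainingUpdate) -> str:
    """Serialize a metrics update to the JSON text sent over the WebSocket."""
    return json.dumps({"type": "metrics", "data": asdict(update)})

msg = to_ws_message(TrainingUpdate(epoch=1, batch=42, loss=0.35, accuracy=0.88))
print(msg)
```

Tagging each message with a `"type"` field lets the frontend route metrics, logs, and resource-usage updates over a single socket.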
**Step 1: Create a Virtual Environment**

It is recommended to use `venv` or `conda` to create an isolated environment and avoid polluting the global Python environment.

```shell
# In the project root
python -m venv venv

# Activate the environment (Windows)
.\venv\Scripts\Activate.ps1

# Activate the environment (Linux/macOS)
source venv/bin/activate
```
**Step 2: Install Dependencies**

Install the FastAPI service dependencies and monitoring tools:

```shell
cd Visualization_backend
pip install fastapi "uvicorn[standard]" websockets psutil nvidia-ml-py scikit-learn
```

> Note: The core framework `DeepFlows` depends on a compiled C++ extension (`.pyd` or `.so`). Make sure the compiled file exists in the `DeepFlows/backend/backend_src/build/Release` directory, or compile it first according to the framework documentation.
**Step 3: Start the Backend Service**

```shell
# Make sure the virtual environment is activated
python server.py
```

After a successful startup you will see:

```
INFO:     Uvicorn running on http://0.0.0.0:8000
[System] Server starting up...
```
#### 3. Frontend Configuration (Visualization_frontend)
The frontend provides a visual interactive interface, including dashboards, model builders, and training configurations.
**Step 1: Install Dependencies**

```shell
cd Visualization_frontend
npm install
```
**Step 2: Configure Environment Variables (Optional)**

By default, the frontend connects to the local `http://127.0.0.1:8000`. To change this, edit the `.env` file:

```
VITE_API_URL=http://127.0.0.1:8000/api
VITE_WS_URL=ws://127.0.0.1:8000/ws
```
**Step 3: Start the Frontend Dev Server**

```shell
npm run dev
```

Visit the address printed in the console (usually `http://localhost:3000`) to enter the system.
### 🚀 Usage Guide

#### 1. Dashboard

- Real-time monitoring: shows the current epoch, batch, loss, and accuracy.
- Visual charts:
  - Loss & Accuracy: dynamic line charts plotting the training curves in real time.
  - Resource usage: real-time CPU, memory, and GPU (if available) usage.
- Full-screen mode: click the full-screen icon in the top right for an immersive view.
#### 2. Model Builder

This is a low-code model building tool that lets you design neural networks via drag-and-drop.

- Add a layer: drag `Conv2d`, `Linear`, `ReLU`, etc. from the left panel onto the center canvas.
- Edit parameters:
  - Click the edit icon (✏️) on the layer card.
  - Modify parameters
