
DeepCAD

This repository provides source code for our paper:

DeepCAD: A Deep Generative Network for Computer-Aided Design Models

Rundi Wu, Chang Xiao, Changxi Zheng

ICCV 2021 (camera ready version coming soon)

<p align="center"> <img src='teaser.png' width=600> </p>

We also release the Onshape CAD data parsing scripts here: onshape-cad-parser.

🧑‍💻 Prerequisites

  • Linux
  • NVIDIA GPU + CUDA CuDNN
  • Python 3.7, PyTorch 1.5+

👬 Dependencies

Install Python package dependencies with pip:

$ pip install -r requirements.txt

Install pythonocc (OpenCASCADE) via conda:

$ conda install -c conda-forge pythonocc-core=7.5.1

📦 Data

Download the data from here (backup) and extract it under the data folder.

  • cad_json contains the original json files that we parsed from Onshape; each file describes a CAD construction sequence.

  • cad_vec contains our vectorized representation of CAD sequences, which enables fast data loading. These files can also be generated with dataset/json2vec.py. TBA.

  • train_val_test_split.json contains the train/validation/test split, referencing each sample by directory and file. See below for the format.

  • Some of the evaluation metrics we use require ground truth point clouds. Run:

    $ cd dataset
    $ python json2pc.py --only_test
    

The data we use are parsed from Onshape public documents, with links from the ABC dataset. We also release our parsing scripts here for anyone interested in parsing their own data.

🗂️ Test/Validation/Train Data split

🗂️ data/train_val_test_split.json maps each bucket (train, validation, test) to the directories and files that belong to it. The format is as follows:

// 👉 json[bucket] → list of "directory/file" entries
{
  "train": [
    "0098/00980001"
  ],
  "validation": [],
  "test": []
}

where train is the bucket, 0098 is the directory under cad_json, and 00980001 refers to the json file 00980001.json.
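A split entry can be resolved to an actual json path along these lines (a minimal sketch based on the path layout described above; resolve_split is a hypothetical helper, not part of the repo):

```python
import json
import os

def resolve_split(split_path, data_root="data"):
    """Map each bucket to full json paths under data/cad_json.

    Assumes entries look like "0098/00980001" as described above.
    """
    with open(split_path) as f:
        split = json.load(f)
    return {
        bucket: [os.path.join(data_root, "cad_json", entry + ".json")
                 for entry in entries]
        for bucket, entries in split.items()
    }
```

For example, `resolve_split("data/train_val_test_split.json")["train"][0]` would yield `data/cad_json/0098/00980001.json`.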

e.g. before:

🗂️ data
   + 🗂️ cad_json
      + 🗂️ 0098
        +  📄 00980001.json

After running the code in 👉dataset/json2vec.py, these files are converted into vectors and stored as h5 files:

"""
@see 👉`dataset/json2vec.py`
"""

"""
 @step Load CADSequence data from a dictionary
"""
cad_seq = CADSequence.from_dict(data)

"""
 @step  Normalize the CADSequence data to fit within a standardized size
"""
cad_seq.normalize()

"""
 @step  Numericalize the CADSequence data by converting continuous values into discrete integers
"""
cad_seq.numericalize()

"""
 @step  
    Convert the CADSequence data into a vector representation with specific constraints
    
    The arguments 
        MAX_N_EXT, 
        MAX_N_LOOPS, 
        MAX_N_CURVES, 
        MAX_TOTAL_LEN 
    determine the maximum limits.

    These are set in 👉cadlib/macro.py

    pad=False indicates that the output vector won't be padded if the constraints are not met
"""
cad_vec = cad_seq.to_vector(
    MAX_N_EXT, 
    MAX_N_LOOPS, 
    MAX_N_CURVES, 
    MAX_TOTAL_LEN, 
    pad=False,
)
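The resulting vector can then be written out as an h5 file (a minimal sketch using h5py; the dataset name "vec" is an assumption, so check dataset/json2vec.py for the exact key):

```python
import os
import h5py
import numpy as np

def save_cad_vec(cad_vec, out_path):
    """Store a vectorized CAD sequence as an h5 file.

    The dataset name "vec" is an assumption; verify against dataset/json2vec.py.
    """
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    with h5py.File(out_path, "w") as fp:
        fp.create_dataset("vec", data=np.asarray(cad_vec, dtype=np.int64))

# e.g. save_cad_vec(cad_vec, "data/cad_vec/0098/00980001.h5")
```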

graph TD;
    00980001.json-->data
    data-->cad_seq
    cad_seq-->normalize
    normalize-->numericalize-->cad_vec
    cad_vec-->00980001.h5

Data folder after running json2vec:

🗂️ data
   + 🗂️ cad_json
      + 🗂️ 0098
        +  📄 00980001.json
   + 🗂️ cad_vec
      + 🗂️ 0098
        +  📄 00980001.h5

🏋️ Training

🏋️ Pre-trained models

Download the pretrained model from here (backup) and extract it under proj_log. All testing commands can then be executed directly by specifying --exp_name=pretrained when needed.

🏋️ Training models

See all hyper-parameters and configurations under config folder.

🏋️ LGAN configuration arguments

The following arguments can be passed to lgan.py; they are configured in 👉config/configLGAN.py:

# ----------------
# 🏋️ lgan example:
# ----------------
#   🎃exp_name - Name of the experiment
#   🗂️proj_dir - Name of the project directory which is `proj_log` by default
#   ⛳️ae_ckpt - Checkpoint for the autoencoder
#   💻gpu_ids - GPU(s) to use 
#
#  👉folder: 🗂️proj_log/newDeepCAD
#
python lgan.py --exp_name newDeepCAD --ae_ckpt 1000 -g 0

| Argument Name | Type | Default Value | Description |
| -------------- | ------- | ------------- | ---------------------------------------------------------- |
| proj_dir | str | proj_log | Path to the project folder where models and logs are saved |
| exp_name | str | Required | Name of the experiment |
| ae_ckpt | str | Required | Checkpoint for the autoencoder |
| continue | boolean | False | Continue training from checkpoint |
| ckpt | str | latest | Desired checkpoint to restore (optional) |
| test | boolean | False | Test mode |
| n_samples | int | 100 | Number of samples to generate when testing |
| gpu_ids | str | 0 | GPU(s) to use (e.g., "0" for one GPU, "0,1,2" for multiple GPUs; CPU not supported) |
| batch_size | int | 256 | Batch size |
| num_workers | int | 8 | Number of workers for data loading |
| n_iters | int | 200000 | Total number of iterations to train |
| save_frequency | int | 100000 | Save models every x iterations |
| lr | float | 2e-4 | Initial learning rate |

🏋️ Training configuration arguments

The following arguments can be passed to train.py and/or test.py; they are configured in 👉config/configAE.py:

# -----------------
# 🏋️ train example:
# -----------------
#   🎃exp_name - Name of the experiment
#   🗂️proj_dir - Name of the project directory which is `proj_log` by default
#   💻gpu_ids - GPU(s) to use 
#
#  👉folder: 🗂️proj_log/newDeepCAD
#
python train.py --exp_name newDeepCAD -g 0

| Argument Name | Type | Default Value | Description |
| ----------------- | ------- | ------------------- | ---------------------------------------------------------- |
| proj_dir | str | proj_log | Path to the project folder where models and logs are saved |
| data_root | str | data | Path to the source data folder |
| exp_name | str | Current folder name | Name of the experiment |
| gpu_ids | str | 0 | GPU(s) to use (e.g., "0" for one GPU, "0,1,2" for multiple GPUs; CPU not supported) |
| batch_size | int | 512 | Batch size |
| num_workers | int | 8 | Number of workers for data loading |
| nr_epochs | int | 1000 | Total number of epochs to train |
| lr | float | 1e-3 | Initial learning rate |
| grad_clip | float | 1.0 | Gradient clipping value |
| warmup_step | int | 2000 | Step size for learning rate warm-up |
| continue | boolean | False | Continue training from checkpoint |
| ckpt | str | latest | Desired checkpoint to restore (optional) |
| vis | boolean | False | Visualize output during training |
| save_frequency | int | 500 | Save models every x epochs |
| val_frequency | int | 10 | Run validation every x iterations |
| vis_frequency | int | 2000 | Visualize output every x iterations |
| augment | boolean | False | Use random data augmentation |
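The warmup_step and lr arguments together describe a learning-rate warm-up. A linear ramp is one common choice; this is an illustrative sketch, not necessarily the exact schedule implemented in the code:

```python
def warmup_lr(step, base_lr=1e-3, warmup_step=2000):
    """Linearly ramp the learning rate from 0 to base_lr over warmup_step steps,
    then hold it at base_lr (defaults match the table above)."""
    return base_lr * min(1.0, step / warmup_step)
```

For example, `warmup_lr(1000)` gives half the base rate, and any step at or beyond 2000 gives the full 1e-3.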

🏋️ Train the autoencoder

To train the autoencoder:

python train.py --exp_name newDeepCAD -g 0

For random generation, further train a latent GAN:

# encode all data to latent space
python test.py --exp_name newDeepCAD --mode enc --ckpt 1000 -g 0

# train latent GAN (wgan-gp)
python lgan.py --exp_name newDeepCAD --ae_ckpt 1000 -g 0

The trained models and experiment logs will be saved in proj_log/newDeepCAD/ by default.


🧪 Testing and Evaluation

Autoencoding

After training the autoencoder, run the model to reconstruct all test data:

$ python test.py --exp_name newDeepCAD --mode rec --ckpt 1000 -g 0

The results will be saved in proj_log/newDeepCAD/results/test_1000 by default, in h5 format (CAD sequences).
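The reconstructed files can be inspected along these lines (a sketch; the dataset names inside the result h5 files are not documented here, so list the keys before reading any of them):

```python
import glob
import os
import h5py

def list_result_keys(results_dir):
    """Return {filename: dataset keys} for every h5 file in results_dir."""
    out = {}
    for path in sorted(glob.glob(os.path.join(results_dir, "*.h5"))):
        with h5py.File(path, "r") as fp:
            out[os.path.basename(path)] = list(fp.keys())
    return out

# e.g. list_result_keys("proj_log/newDeepCAD/results/test_1000")
```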
