# ComplexGen: CAD Reconstruction by B-Rep Chain Complex Generation
<p align="center"> <img src="/images/teaser.png" width="900"> </p>

This is the official implementation of the following paper:

Guo H X, Liu S L, Pan H, Liu Y, Tong X, Guo B N. ComplexGen: CAD Reconstruction by B-Rep Chain Complex Generation. SIGGRAPH 2022
Abstract: We view the reconstruction of CAD models in the boundary representation (B-Rep) as the detection of geometric primitives of different orders, i.e., vertices, edges and surface patches, and the correspondence of primitives, which are holistically modeled as a chain complex, and show that by modeling such comprehensive structures more complete and regularized reconstructions can be achieved. We solve the complex generation problem in two steps. First, we propose a novel neural framework that consists of a sparse CNN encoder for input point cloud processing and a tri-path transformer decoder for generating geometric primitives and their mutual relationships with estimated probabilities. Second, given the probabilistic structure predicted by the neural network, we recover a definite B-Rep chain complex by solving a global optimization maximizing the likelihood under structural validness constraints and applying geometric refinements. Extensive tests on large scale CAD datasets demonstrate that the modeling of B-Rep chain complex structure enables more accurate detection for learning and more constrained reconstruction for optimization, leading to structurally more faithful and complete CAD B-Rep models than previous results.
<p align="center"> <img src="/images/pipeline.png" width="1000"> </p>

The pipeline contains three main phases. We show how to run the code for each phase and provide the corresponding checkpoints and data.
## Data downloading
We provide the pre-processed ABC dataset used for training and evaluating ComplexNet. You can download it from BaiduYun or OneDrive and extract it with 7-Zip. Details of the pre-processing pipeline can be found in the supplemental material of our paper.
The data contains surface points along with normals and the corresponding ground-truth B-Rep labels. After extracting the archive under the root directory, the data should be organized as follows:
```
ComplexGen
│
└─── data
     │
     └─── default
     │    │
     │    └─── train
     │    │
     │    └─── val
     │    │
     │    └─── test
     │    │
     │    └─── test_point_clouds
     │
     └─── partial
          │
          └─── ...
```
<!-- Here _noise_002_ and _noise_005_ means noisy point clouds with normal-distribution-perturbation of mean value _0.02_ and _0.05_ respectively. -->
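Once extracted, the layout can be sanity-checked with a few lines of Python. This is a minimal sketch: the folder names follow the tree above, and `root` is an assumption you should adjust if the data was extracted elsewhere.

```python
from pathlib import Path

def check_layout(root="data"):
    """Verify that the expected split folders exist under <root>/default.

    Folder names follow the directory tree shown above; `root` is
    illustrative and should point at the extracted data directory.
    """
    expected = ["train", "val", "test", "test_point_clouds"]
    base = Path(root) / "default"
    return {name: (base / name).is_dir() for name in expected}
```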
[Optional] You can also find the output of each phase from BaiduYun or OneDrive. For each test model, there will be 4 or 5 outputs:
- `*_input.ply`: input point cloud.
- `*_prediction.pkl`: output of the 'ComplexNet prediction' phase.
- `*_prediction.complex`: visualizable file for `*_prediction.pkl`; elements with a validity probability larger than 0.3 are kept.
- `*_extraction.complex`: output of the 'complex extraction' phase.
- `*_geom_refine.json`: output of the 'geometric refinement' phase, which is also the final output.
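The 0.3 validity threshold used when producing `*_prediction.complex` can be reproduced on any list of predicted elements. A minimal sketch follows; the field name `prob` is purely illustrative and the actual structure is documented in the pickle description.

```python
def filter_valid(elements, threshold=0.3):
    # Keep predicted elements whose validity probability exceeds the
    # threshold. 'prob' is an illustrative field name, not necessarily
    # the key used inside the prediction pickle.
    return [e for e in elements if e["prob"] > threshold]
```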
The description and visualization of each file type can be found in the pickle description, complex description and json description. If you want to directly evaluate the provided output data of ComplexGen, put the extracted experiments folder under the root folder ComplexGen, then conduct Environment setup and Evaluation.
## Phase 1: ComplexNet prediction
### Environment setup with Docker
```
$ docker pull pytorch/pytorch:1.6.0-cuda10.1-cudnn7-devel
$ docker run --runtime=nvidia --ipc=host --net=host -v /path/to/complexgen/:/workspace -t -i pytorch/pytorch:1.6.0-cuda10.1-cudnn7-devel
$ cd /workspace
$ apt-get update && apt-get install libopenblas-dev -y && conda install numpy mkl-include pytorch cudatoolkit=10.1 -c pytorch -y && apt-get install git -y && pip install git+https://github.com/NVIDIA/MinkowskiEngine.git@v0.5.0 --user
$ cd chamferdist && python setup.py install --user && pip install numba --user && pip install methodtools --user && pip install tensorflow-gpu --user && pip install scipy --user && pip install rtree --user && pip install plyfile --user && pip install trimesh --user && cd ..
```
[Note]: If `apt-get update` fails, first run `rm /etc/apt/sources.list.d/cuda.list` (details in NVIDIA/nvidia-docker#619).
To test if the environment is set correctly, run:
```
$ ./scripts/train_small.sh
```
This command will start the training of ComplexNet on a small dataset with 64 CAD models.
### Testing
To test the trained ComplexNet, please first download the trained weights used in our paper from BaiduYun or GoogleDrive, and unzip them under the root directory:
```
ComplexGen
│
└─── experiments
     │
     └─── default
     │    │
     │    └─── ckpt
     │         │
     │         └─── *.pth
     │
     └─── ...
```
Then run:
```
$ ./scripts/test_default.sh
```
You can find the network prediction for each model (`*.pkl`) under ComplexGen/experiments/default/test_obj/. The description of each pickle file (`*.pkl`) can be found here.
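The prediction pickles can be loaded with standard Python. A minimal sketch, assuming an ordinary pickle file; the meaning of its fields is documented in the pickle description rather than guessed at here.

```python
import pickle

def load_prediction(pkl_path):
    # Load one *_prediction.pkl file produced by the testing script.
    # Refer to the repository's pickle description for what the loaded
    # object actually contains.
    with open(pkl_path, "rb") as f:
        return pickle.load(f)
```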
You can also get the visualizable models of corner/curve/patch of some test data by running:
```
$ ./scripts/test_default_vis.sh
```
A set of 3D models will be generated under ComplexGen/experiments/default/vis_test/, which can be visualized with 3D software such as MeshLab.
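For a quick sanity check without opening MeshLab, the vertex count of a generated PLY file can be read from its header. A minimal standard-library sketch (the PLY header is ASCII even for binary-encoded files):

```python
def ply_vertex_count(path):
    # Parse the PLY header and return the declared vertex count,
    # or 0 if no 'element vertex' line is found before 'end_header'.
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", "ignore").strip()
            if line.startswith("element vertex"):
                return int(line.split()[-1])
            if line == "end_header":
                break
    return 0
```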
<!-- We also provided the forwarded pickle file here (todo). If you want to use it, please download and unzip it under the root directory. -->

### Training
If you want to train ComplexNet from scratch, run:
```
$ ./scripts/train_default.sh
```
By default, ComplexNet is trained on a server with 8 V100 GPUs. You can change the number of GPUs by setting the --gpu flag in ./scripts/train_default.sh, and change the batch size with the batch_size flag. Training takes about 3 days to converge.
## Phase 2: Complex extraction
### Environment setup
```
$ pip install gurobipy==9.1.2 && pip install Mosek && pip install sklearn
```
Note that you also need to manually set up a Gurobi license.
To conduct complex extraction, run:
```
$ ./scripts/extraction_default.sh
```
A set of complex files will be generated under ComplexGen/experiments/default/test_obj/. The description and visualization of complex files can be found here. As the average extraction time for each model is about 10 minutes, we recommend conducting complex extraction on a multi-core CPU server. To do this, set flag_parallel to True and num_parallel to half the number of available threads in ComplexGen/PostProcess/complex_extraction.py.
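The recommended num_parallel value (half of the available threads) can be computed rather than hard-coded. A small sketch using only the standard library:

```python
import os

# Half of the machine's hardware threads, floored at 1, matching the
# num_parallel recommendation above. os.cpu_count() may return None
# on exotic platforms, hence the fallback.
num_parallel = max(1, (os.cpu_count() or 2) // 2)
```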
## Phase 3: Geometric refinement
The code for this phase can be compiled only under Windows. If you want to build it under Linux, please follow the (Chinese) instructions here or here.
<!-- as I fail to compile and link [clapack](https://netlib.org/clapack/) with our project under Linux. Sorry for the inconvenience :-( -->

### Environment setup
libigl and Eigen are required; you can install them via vcpkg:
```
$ vcpkg.exe integrate install
$ vcpkg.exe install libigl
$ vcpkg.exe install eigen3
```
### Compile and build
The C++ project can be generated with CMake:
```
$ cd PATH_TO_COMPLEXGEN\GeometricRefine
$ mkdir build
$ cd build
$ cmake ..
```
Then you can build GeometricRefine.sln with Visual Studio. After that, you'll find GeometricRefine.exe under PATH_TO_COMPLEXGEN/GeometricRefine/Bin.
To conduct geometric refinement for all models, first modify .\scripts\geometric_refine.py by setting 'pc_ply_path' to the path containing the input point clouds stored in .ply format and 'complex_path' to the path containing the results of complex extraction, then run:
```
$ cd PATH_TO_COMPLEXGEN
$ python .\scripts\geometric_refine.py
```
If you are processing noisy/partial data, please replace the second command with:
```
$ python .\scripts\geometric_refine.py --noise
```
You will find the generated json files under 'complex_path'. The description of the generated json files can be found in the json description.
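The final json output can be inspected with the standard library. A minimal sketch that reports the top-level structure of one file; the actual key names and schema are given in the json description, not assumed here.

```python
import json

def summarize_complex_json(path):
    # Report the Python type of every top-level entry in a
    # *_geom_refine.json file. The schema is defined by the json
    # description in the repository; this only surveys what is present.
    with open(path) as f:
        data = json.load(f)
    return {k: type(v).__name__ for k, v in data.items()}
```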
