# FloorPlan2Design

Internship project at the Institute of Automation, Chinese Academy of Sciences: a tool that uses deep learning models to automatically convert architectural floor plans into design renderings.

## Overview

An AIGC project from my sophomore-year internship at the Institute of Automation, Chinese Academy of Sciences.

FloorPlan2Design focuses on automatically converting building floor plans into detailed design renderings. After reading the ArchiGAN paper, I chose the pix2pix conditional GAN (cGAN) framework, which, while not state-of-the-art, is a solid foundation for this image-to-image translation task.
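As described in the pix2pix paper, the generator is trained with an adversarial term plus an L1 reconstruction term weighted by λ (100 in the paper). A minimal pure-Python sketch of that combined loss — the function name and flat-list inputs are illustrative, not this project's actual training code:

```python
import math

def generator_loss(d_fake_logits, fake_pixels, real_pixels, lam=100.0):
    """Pix2pix generator objective: adversarial BCE term + lam * L1 term.

    d_fake_logits: discriminator logits on generated patches.
    fake_pixels / real_pixels: flattened pixel values in [0, 1].
    """
    # Adversarial term: the generator wants D(x, G(x)) pushed towards "real".
    adv = sum(-math.log(1.0 / (1.0 + math.exp(-z))) for z in d_fake_logits)
    adv /= len(d_fake_logits)
    # L1 term keeps outputs close to the ground-truth rendering (less blurry than L2).
    l1 = sum(abs(f - r) for f, r in zip(fake_pixels, real_pixels)) / len(fake_pixels)
    return adv + lam * l1
```

With λ = 100 the L1 term dominates, which is what keeps pix2pix outputs structurally faithful to the input floor plan.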
## Main Features

- **Automatic Translation**: Convert building floor plans into design renderings with minimal human intervention.
- **Proven Framework**: Uses the pix2pix cGAN as the image-to-image translation model.
- **Customizable Data Pipeline**: Includes extensive preprocessing and data-management tools to improve model performance.
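pix2pix judges realism with a PatchGAN discriminator whose default configuration has a 70×70-pixel receptive field; that number can be checked from the conv stack alone. A small sketch (the layer list is the standard pix2pix discriminator from the paper, not code from this repository):

```python
def receptive_field(layers):
    """Receptive field of a conv stack, computed back to front.

    layers: list of (kernel_size, stride) pairs, input side first.
    """
    rf = 1
    for k, s in reversed(layers):
        rf = (rf - 1) * s + k
    return rf

# Standard pix2pix PatchGAN: three stride-2 4x4 convs, then two stride-1 4x4 convs.
patchgan = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
print(receptive_field(patchgan))  # -> 70
```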

## Dataset

To avoid a bloated download, only 4 to 6 images of the full dataset are kept in the repository, for demonstration and as file-location hints.

I trained on the CVC-FP dataset. Its quality is poor, but no better public dataset was available, so I cleaned and processed this one instead.

After downloading the dataset, place the files in the `ImagesGT` folder. During preprocessing and training, several problems were found and solved:
- [x] SVG rendering problem: due to code corruption, most SVG annotations cannot be rendered in browsers or other viewers.
- [x] Room classification: the original SVG files do not distinguish room types, using a generic "room" label instead, which makes classifying rooms harder.
- [x] File naming mismatch: some images and their corresponding SVG files differ in name or extension.
- [x] Unmatched SVGs: some images have no corresponding SVG file at all.
- [x] Duplicate images: some images are duplicated and unusable.
- [x] Inconsistent color codes: the color codes in the SVG annotations vary from file to file.
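The naming mismatches and orphaned files above can be detected mechanically. A sketch of that check, assuming PNG images and one SVG per image stem (the function name and folder arguments are illustrative):

```python
from pathlib import Path

def pair_images_with_svgs(image_dir, svg_dir):
    """Match each PNG to its SVG annotation by filename stem (case-insensitive).

    Returns (paired, orphans): orphans are images with no annotation,
    which should be dropped from the training set.
    """
    svgs = {p.stem.lower(): p for p in Path(svg_dir).glob("*.svg")}
    paired, orphans = [], []
    for img in sorted(Path(image_dir).glob("*.png")):
        match = svgs.get(img.stem.lower())
        if match is not None:
            paired.append((img, match))
        else:
            orphans.append(img)
    return paired, orphans
```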
The raw dataset posed significant challenges and required extensive preprocessing. I developed a batch processing script to correct and standardize all SVG annotations. The Python script is thoroughly documented to help understand the functionality of each component.
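The color standardization the batch script performs can be sketched like this — a simplified illustration, not the actual script; the palette and function name are invented, and the real mapping lives in `utils/colour_change`:

```python
import xml.etree.ElementTree as ET

# Hypothetical canonical palette; the real mapping is defined in utils/colour_change.
CANONICAL = {
    "kitchen": "#ff0000",
    "bedroom": "#00ff00",
    "room": "#0000ff",
}

def normalise_colours(svg_text):
    """Rewrite every fill attribute to the canonical color for its class label,
    so all annotations share one color scheme regardless of the source file."""
    root = ET.fromstring(svg_text)
    for el in root.iter():
        label = el.get("class")
        if label in CANONICAL:
            el.set("fill", CANONICAL[label])
    return ET.tostring(root, encoding="unicode")
```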
## Run the Project

Download the project and the dataset mentioned above, then run the Python scripts in the `utils` directory to clean the data; each file carries clear comments. Once the dirty data has been processed, run the project from the notebooks (`.ipynb`). I wrote three models for this task, and each one runs end to end and produces results.
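A common pix2pix data convention, which the `Dataset/A` (outline) and `Dataset/B` (color annotation) folders fit naturally, is to stitch each pair side by side into one training image. A toy sketch with images as row-major lists of pixel rows — illustrative only; the project's own loader may differ:

```python
def concat_ab(a_rows, b_rows):
    """Concatenate a paired outline image (A) and its color-annotated
    counterpart (B) side by side, row by row."""
    if len(a_rows) != len(b_rows):
        raise ValueError("A and B must have the same height")
    return [ra + rb for ra, rb in zip(a_rows, b_rows)]

# A 2x2 outline next to its 2x2 annotation becomes one 2x4 training image.
ab = concat_ab([[0, 0], [0, 0]], [[1, 1], [1, 1]])
```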
## Visual Results
Here are some examples of visual output generated by the model:

## Folder Structure
Here is an overview of the project structure:
```
├── Dataset                  # Dataset used in the project
│   ├── A                    # Black-and-white outline images
│   ├── B                    # Images with color annotations
│   ├── ImagesGT             # Original downloaded images
│   ├── Initial_Data
│   │   ├── colour           # Images with updated color scheme
│   │   │   ├── 1_windows
│   │   │   ├── 2_all_room
│   │   │   ├── 3_all_wall
│   │   │   ├── 4_kitchen_room
│   │   │   ├── 5_livingroom_room
│   │   │   ├── 6_bathroom_room
│   │   │   ├── 7_bedroom_room
│   │   │   └── final
│   │   ├── final_svg        # Final, corrected SVG files
│   │   ├── fix_svg          # Temporary SVG corrections
│   │   ├── PNG              # Converted PNG files
│   │   └── SVG              # Original SVG files
│   └── svg_a_b              # Final usable SVG data
│       ├── a
│       └── b
├── Training
│   ├── checkpoints          # Model checkpoints saved during training
│   ├── generated_images     # Images generated during training
│   │   ├── 1
│   │   ├── 2
│   │   └── 3
│   └── logs                 # Training logs
└── utils                    # Utility scripts for SVG modification
    └── colour_change        # Tool for changing the SVG color scheme
```
## Getting Started

- Dataset Preparation:
  - Download the CVC-FP dataset and place the images in the `ImagesGT` directory.
  - Run the preprocessing scripts to correct and normalize the dataset.
- Model Training:
  - Follow the instructions in the `Training` directory to start training the model.
  - Checkpoints and logs will be saved automatically.
## Acknowledgements
Special thanks to the Institute of Automation, Chinese Academy of Sciences for providing the project internship opportunity.
