3DGEN (UPDATING)

A synthetic Mesh-Image-Depth dataset generation method for civil structures

<p align="center"> <img src="./beam_images.png" alt="beam images"> </p>

About this repository

  1. Beam-shaped mesh generation code. <br>
  2. Simply supported beam and cantilever beam mesh generation code.<br>
  3. Code for adjusting the camera pose.<br>
  4. Code for rendering with Pyrender.<br>
  5. Code for rendering with Blender.<br>
  6. Beam-Shaped Structure Dataset: this dataset includes 900 base beam meshes, 36,000 cantilever meshes, and 36,000 simply supported beam meshes, all derived from the base beam meshes. Additionally, depth maps and multi-view images of these meshes have been rendered using Blender. The dataset can be downloaded from the link below: <br> Link: https://studentcurtinedu-my.sharepoint.com/:f:/g/personal/19286158_student_curtin_edu_au/EtEpgOtY_39NuN2aE_KzVNoB4z2lCml4SXTYn5ML7TfSmg?e=0oeSXV

IMPORTANT

  1. The repository contains pure Python code.
  2. 3D mesh generation: the mesh generation uses Trimesh and supports prebuilt mesh files in formats such as .obj, .ply, and other popular formats.
  3. Generative mesh texturing resources: <br> Paper: https://arxiv.org/abs/2302.01721. <br> GitHub: https://github.com/TEXTurePaper/TEXTurePaper. <br> Hugging Face Spaces: https://huggingface.co/spaces/TEXTurePaper/TEXTure.
  4. Image rendering: images can be rendered with either Pyrender (fast, lower quality) or Blender (slow, higher quality).
  5. For metric depth estimation and 3D surface reconstruction, Blender must be used.
  6. Some excellent works that can be followed for monocular depth estimation: <br> LeReS: https://github.com/aim-uofa/AdelaiDepth/tree/main/LeReS <br> MiDaS: https://github.com/isl-org/MiDaS <br>
  7. For monocular-vision-based 3D mesh reconstruction, Pyrender can be used to generate the multi-view images.

Dataset of Beams

A synthetic mesh-image dataset for beams in structural engineering. The purpose of building this dataset is to support monocular-image-based 3D beam reconstruction.

The dataset can be downloaded from the links below.

3D assets:

Including original beams and deformed beams:
https://curtin-my.sharepoint.com/:f:/g/personal/19286158_student_curtin_edu_au/EtTbYaoDE1IYhPK5hIrW20cBYrumpMoqbx3Ta_Mt4GZNmA?e=BLNlLq

Images and depth maps:

They are located in seven folders; each folder contains depth maps and images.

https://curtin-my.sharepoint.com/:f:/g/personal/19286158_student_curtin_edu_au/EpwhpWeMSIpMkC67puGjq4YBOgy4OJxxi7r_pqCE69z8RQ?e=7RzEF0
https://curtin-my.sharepoint.com/:f:/g/personal/19286158_student_curtin_edu_au/EolQcyaEmHJBs8Ux1ugCzTYB4O7TOcOVDeRjAp4Ptjcoyw?e=i8qKzX
https://curtin-my.sharepoint.com/:f:/g/personal/19286158_student_curtin_edu_au/EkF6llhVx0NCk1yNSLZW5hABDRozuGPANt8w_zXoglOrvQ?e=DGGKXG
https://curtin-my.sharepoint.com/:f:/g/personal/19286158_student_curtin_edu_au/Eh1s2Um0LWpFsgJD97TY_4sBb5znKmSnfpxnMtq1HTGAjw?e=eHtEHJ
https://curtin-my.sharepoint.com/:f:/g/personal/19286158_student_curtin_edu_au/EqGiQx8Sn2dMjDpw8Q1iqRoBHXXQvDBuhcTcmrTpc2m1yA?e=VLyFXs
https://curtin-my.sharepoint.com/:f:/g/personal/19286158_student_curtin_edu_au/Ensu90YexH9KqvIgmSmx6ZcB2kdnp5ufKe-34esorAwULA?e=Js7LER
https://curtin-my.sharepoint.com/:f:/g/personal/19286158_student_curtin_edu_au/Eq46RYYJVcQmabaEJqmLUGEB75hOJjIOmZvmejwoPYSruQ?e=shrSFC

The generative 3D mesh texturing can be found from:

Paper: https://arxiv.org/abs/2302.01721

GitHub: https://github.com/TEXTurePaper/TEXTurePaper

Hugging face: https://huggingface.co/spaces/TEXTurePaper/TEXTure

beams

This folder contains 400 rectangular cuboid beam meshes generated with the Trimesh API. The beams are generated with different ratios of length, width and height, so some of them are 'fat' and some of them are 'thin'. More details:

1. Calculate the ratio between length and width: the ratio between the length and width can be expressed as length : width = 1 : (0.25 to 0.02), meaning the width can vary between 0.25 and 0.02 times the length. We generate 20 different widths using the expression below:

widths = [max_rate - (i / (num_rates - 1))**0.85 * (max_rate - min_rate) for i in range(num_rates)]

In this dataset: max_rate = 0.25, min_rate = 0.02, num_rates = 20.

2. Calculate the ratio between width and height: based on the width calculated in the previous step, we then determine the height. 20 ratios between height and width, ranging from 0.05 to 1.5, are generated, evenly distributed as:

np.linspace(0.05, 1.5, 20)

The sizes of the beams can be found in the file '/beams/mesh_meta'.

Each beam has 6147 vertices and 12288 face normals.
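
The sizing logic above can be sketched as follows. This is a reconstruction from the stated parameters, not the repository's actual script; the unit beam length of 1.0 is an assumption:

```python
import numpy as np

max_rate, min_rate, num_rates = 0.25, 0.02, 20

# Step 1: 20 width-to-length ratios, spaced along a 0.85-power curve
# so the samples are denser toward the thin end.
widths = [max_rate - (i / (num_rates - 1)) ** 0.85 * (max_rate - min_rate)
          for i in range(num_rates)]

# Step 2: 20 height-to-width ratios, evenly spaced from 0.05 to 1.5.
height_ratios = np.linspace(0.05, 1.5, 20)

# The 20 x 20 grid yields the 400 (length, width, height) extents.
extents = [(1.0, w, w * r) for w in widths for r in height_ratios]
```

Each extents triple can then be turned into a cuboid mesh, e.g. with trimesh.creation.box(extents=...).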

simple_support_beam

This folder contains 7200 deformed simply supported beams. These deformed beams are generated by applying the analytic solution for a simply supported beam to the original beams defined in the 'beams' folder. The analytic solution:

# P: the applied load.
# a: the location along the beam where the applied load P is acting.
# E: Young's modulus of the material (E = 210000000000).
# I: the moment of inertia of the cross-sectional shape of the beam.
# L: the length of the beam.

def simple_support_beam(mesh, P, a, E, I, L=1):
    for vertex in mesh.vertices:
        x = vertex[0]
        if x < a:
            y = (P * (L - a) * x * (L**2 - x**2 - (L - a)**2)) / (6 * L * E * I)
        else:
            y = (P * a * (L - x) * (L**2 - (L - x)**2 - a**2)) / (6 * L * E * I)
        vertex[1] += y  # Add the deflection to the y-coordinate of the vertex
    return mesh

Before applying the loads to the beams, the desired loads are estimated: given a desired displacement, the corresponding load can be computed.

# Length: Length of the beam.
# D_dis: desired displacement.
# E: Young's modulus of the material.
# I: the moment of inertia of the cross-sectional shape of the beam.
# a: the location along the beam where the applied load or force P is acting.
# Estimate the desired applied load given desired displacement

def power_estimation(Length, D_dis, E, I, a):
    D_power = int((6 * Length * E * I * D_dis) / (a * (Length - a) * (Length ** 2 - (Length - a) ** 2 - a ** 2)))
    return D_power

The load is applied at two locations along each beam: 0.25 and 0.5 of its length. For each location, 9 different loads are applied, spanning the range between the minimum displacement (0) and the maximum displacement (0.05), since the deflection of a simply supported beam is typically not severe. In total there are 18 loads, each of which generates a distinct mesh for the beam.

locs = [0.25, 0.50]
num_mesh = 10
for loc in locs:
    D_power_max = power_estimation(Length, 0.05, E, I, loc)
    D_power_min = 0
    step = (D_power_max - D_power_min) // (num_mesh - 1)
    D_powers = [x for x in range(D_power_min, D_power_max, step)]
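
A quick round-trip check of the two formulas above: estimate the load for a desired displacement, then plug it back into the deflection formula at x = a. The I value here is illustrative, not taken from the dataset:

```python
def power_estimation(Length, D_dis, E, I, a):
    # Invert the deflection formula at x = a to solve for the load P.
    return int((6 * Length * E * I * D_dis)
               / (a * (Length - a) * (Length ** 2 - (Length - a) ** 2 - a ** 2)))

def deflection_at(x, P, a, E, I, L=1):
    # Analytic deflection of a simply supported beam under point load P at a.
    if x < a:
        return (P * (L - a) * x * (L**2 - x**2 - (L - a)**2)) / (6 * L * E * I)
    return (P * a * (L - x) * (L**2 - (L - x)**2 - a**2)) / (6 * L * E * I)

L, E, I = 1.0, 210e9, 1e-6     # I is an illustrative moment of inertia
a, target = 0.25, 0.05
P = power_estimation(L, target, E, I, a)   # approximately 896000 here
y = deflection_at(a, P, a, E, I, L)        # deflection where the load acts
```

Up to the int() truncation in power_estimation, y recovers the desired displacement.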

The meta information of the deformed beams can be found in the file './dataset/simple_support_beam/deformed_info.json'.

  • The top-level structure is a dictionary where each key represents a specific beam. Each beam entry within the deformed_info dictionary contains information about the different load locations along the beam.
    • Each location within a beam entry is represented by a key, with the location value scaled by 100 for clarity.
    • Within each location, there is information about the different loads ('powers') applied to the beam.
      • Within each power/load, there is a sub-dictionary containing specific information about the deformed beam.
        • The "Path" key represents the file path where the deformed beam mesh is stored.
        • The "E" key corresponds to Young's modulus.
        • The "I" key represents the moment of inertia.

A tree diagram of the JSON file is shown below:

root
├─ beam_0000
│  ├─ location 1
│  │  ├─ load 1
│  │  │  ├─ Path
│  │  │  ├─ E
│  │  │  └─ I
│  │  ├─ load 2
│  │  │  ├─ Path
│  │  │  ├─ E
│  │  │  └─ I
│  │  └─ ...
│  ├─ location 2
│  │  ├─ power1
│  │  │  ├─ Path
│  │  │  ├─ E
│  │  │  └─ I
│  │  ├─ power2
│  │  │  ├─ Path
│  │  │  ├─ E
│  │  │  └─ I
│  │  └─ ...
│  └─ ...
├─ beam_0001
│  ├─ ...
│  └─ ...
└─ ...
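
Traversing that nesting could look like this. The concrete keys and path below are made up for illustration; in practice the dictionary would come from json.load() on deformed_info.json:

```python
# Illustrative stand-in for the parsed deformed_info.json (keys are hypothetical).
deformed_info = {
    "beam_0000": {
        "25": {                      # load location 0.25, scaled by 100
            "896000": {"Path": "simple_support_beam/beam_0000_25_896000.obj",
                       "E": 210000000000,
                       "I": 1e-06},
        },
    },
}

# Walk every record: beam -> location -> load -> metadata.
records = []
for beam_id, locations in deformed_info.items():
    for loc, loads in locations.items():
        for load, info in loads.items():
            records.append((beam_id, float(loc) / 100, int(load), info["Path"]))
```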

simple_support_beam_images

The images of each deformed beam are rendered using the Pyrender API and stored in this folder.

For each deformed beam, we generated 120 synthetic cameras to capture 1920x1920 images of the deformed beam. The camera locations were randomly sampled from a sphere surrounding the beam. The visualization below shows the possible camera positions:

<p align="center"> <img src="./camera_position.png" alt="camera positions"> </p>

The code below is used to generate random camera poses around a 3D object for rendering.

  • Function rotation_matrix(roll, pitch, yaw): This function generates a rotation matrix given roll, pitch, and yaw angles. The rotation matrix is used to rotate the camera around the object. The rotation is done in the order of roll (rotation about the x-axis), pitch (rotation about the y-axis), and yaw (rotation about the z-axis).
  • Calculating the center and size of the mesh: The center of the mesh is calculated as the mean of the mesh's bounds (the minimum and maximum coordinates of the mesh). The size of the mesh is calculated as the difference between the maximum and minimum coordinates of the mesh.
  • Defining the distance of the camera from the center of the mesh: The distance of the camera from the center of the mesh is set to be the maximum dimension of the mesh plus the z-coordinate of the center. This ensures that the camera is positioned far enough from the object to capture it fully in the frame.
  4. Generating random camera poses: The code then enters a loop in which it generates 120 random camera poses. For each pose, it randomly samples a yaw and pitch angle and fixes the roll angle at 0. It then computes the camera position (eye) from the sampled angles and the distance, and assembles the pose from the rotation matrix and this position.
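
The steps above can be sketched as follows. The angle ranges and the convention that the camera looks along its local -z axis (Pyrender's convention) are assumptions; the repository's exact code may differ:

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Combined rotation, applied in the order roll (x), pitch (y), yaw (z)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(roll), -np.sin(roll)],
                   [0, np.sin(roll),  np.cos(roll)]])
    Ry = np.array([[ np.cos(pitch), 0, np.sin(pitch)],
                   [0, 1, 0],
                   [-np.sin(pitch), 0, np.cos(pitch)]])
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def camera_setup(bounds):
    """Center and camera distance from a mesh's (min, max) bounds."""
    bounds = np.asarray(bounds, dtype=float)
    center = bounds.mean(axis=0)          # mean of the min/max corners
    size = bounds[1] - bounds[0]          # extent along each axis
    distance = size.max() + center[2]     # max dimension plus center z
    return center, distance

def random_camera_pose(center, distance, rng):
    """One 4x4 camera-to-world pose on a sphere of radius `distance`."""
    yaw = rng.uniform(0.0, 2.0 * np.pi)
    pitch = rng.uniform(-0.5 * np.pi, 0.5 * np.pi)
    R = rotation_matrix(0.0, pitch, yaw)  # roll fixed at 0
    eye = center + R @ np.array([0.0, 0.0, distance])
    pose = np.eye(4)
    pose[:3, :3] = R
    pose[:3, 3] = eye
    return pose
```

Calling random_camera_pose 120 times yields the camera set; each pose can then be attached to a Pyrender camera, e.g. via scene.add(camera, pose=pose).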

No findings