VisSimFramework
Test framework and reference implementation of our algorithms relating to the real-time simulation of human vision.
Vision Simulation Test Framework
This is the framework I implemented during my PhD studies for developing and testing our algorithms relating to the real-time simulation of human vision.
Main features of the framework
- 3D scene and camera management.
- Utilizes a custom entity-component system.
- Automatic library and source code discovery.
- On-the-fly update and render graph building.
- Custom command-line configuration implementation.
- Logging with multiple output devices.
- Console, files, in-memory.
- Automatic scoped logging regions.
- A job system.
- Easy-to-use, thread-based distribution of work.
- Rich editor interface with extensive debugging capabilities.
- Highly customizable and themeable.
- Dockable interface elements.
- GPU object (shader, buffer, texture) inspection.
- On-the-fly entity and material editing.
- Extensive in-memory log inspector.
- Key-frame animations.
- Automatic recording of key-frames based on user interaction.
- Real-time and lock-step playback with optional video output.
- CPU and GPU profiling.
- Automatic, scoped, nested regions.
- Configurable tabular logging of the results.
- Observable on-the-fly in a graphical and tree form.
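The automatic, scoped, nested profiling regions can be illustrated with a minimal sketch. This is a conceptual Python example only, not the framework's actual C++ API; the `Profiler` class and its `region` context manager are hypothetical names invented for illustration:

```python
import time
from contextlib import contextmanager

class Profiler:
    """Accumulates timings for nested, named regions (illustrative sketch)."""
    def __init__(self):
        self.stack = []    # names of currently open regions
        self.results = {}  # fully qualified region name -> total seconds

    @contextmanager
    def region(self, name):
        # Entering a region pushes it onto the stack; the qualified name
        # encodes the nesting, e.g. "Frame/Render".
        self.stack.append(name)
        qualified = "/".join(self.stack)
        start = time.perf_counter()
        try:
            yield
        finally:
            elapsed = time.perf_counter() - start
            self.results[qualified] = self.results.get(qualified, 0.0) + elapsed
            self.stack.pop()

profiler = Profiler()
with profiler.region("Frame"):
    with profiler.region("Update"):
        time.sleep(0.01)
    with profiler.region("Render"):
        time.sleep(0.01)
```

The scope-based pattern guarantees that a region is closed even if the enclosed code raises, which is why scoped regions are a good fit for both logging and profiling.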
Rendering
- Implemented using OpenGL 4.
- Custom shader compilation supporting `#include` statements and detailed error reports.
- Phong and Blinn-Phong shading models.
- Physically-based shading using the Cook-Torrance model.
- Deferred shading with support for HDR rendering.
- Normal mapping.
- Layered rendering.
- Content and time-adaptive local and global tone mapping.
- Multiple tone mapping operators are supported.
- Multisample anti-aliasing (MSAA).
- Direct and indirect lighting.
- Multiple source types (directional, point, spot).
- Shadow mapping with several different filtering approaches (variance, exponential, moments).
- Voxel global illumination.
- CPU occlusion culling.
- Cubemap-based skyboxes and volumetric clouds.
- Post-process filters:
- Motion blur.
- Fast approximate anti-aliasing (FXAA).
- Debug visualizers for GBuffer and voxel grid contents.
- Color look-up tables.
- Simulation of aberrated vision.
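As an illustration of a global tone mapping operator, the extended Reinhard mapping is shown below. This is a standard operator from the literature; whether it is among the operators the framework implements is an assumption, so treat this as a conceptual sketch:

```python
def reinhard(luminance, white_point=4.0):
    """Extended Reinhard global tone mapping operator.

    Compresses HDR luminance so that inputs equal to `white_point`
    map exactly to 1.0, while preserving contrast in dark regions.
    """
    return luminance * (1.0 + luminance / (white_point ** 2)) / (1.0 + luminance)
```

In a real renderer this would run per pixel (typically in a shader) on scene luminance, with `white_point` chosen adaptively from the frame's luminance statistics, which is what content- and time-adaptive tone mapping refers to.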
Requirements
Hardware
The framework makes heavy use of compute shaders; therefore, an OpenGL 4.3 compatible video card is required.
For reference, all tests and performance measurements published in our papers were performed on the following system configuration:
- CPU: AMD Ryzen 7 1700X
- GPU: NVIDIA TITAN Xp
- Memory: 32 GBytes
Software
The framework requires the following external software:
- Microsoft Visual Studio
- Tested with v16.6.3.
- The SmartCommandLineArguments extension is recommended (the `.json` file is auto-generated by the build script).
- MATLAB
- Tested with R2020b.
- Required toolboxes:
- Eye reconstruction method of Csoba and Kunkli:
- PSNR computation:
- Python
- Tested with version 3.8.6.
- List of main dependencies, with the version used during our tests in parentheses:
- numpy (1.18.5)
- tensorflow (2.5.0)
- tensorflow_addons (0.13.0)
- keras_tuner (1.0.3)
- humanize (3.1.0)
- matplotlib (3.3.4)
- pandas (1.1.3)
- psutil (5.9.0)
- seaborn (0.11.2)
- tabulate (0.8.9)
- Note that this list is incomplete and only includes the most relevant third-party packages.
Third-party libraries
All third-party libraries are omitted due to file size limitations. The necessary binaries for building with VS 2019 can be downloaded from here. All third-party library files should be placed in the Libraries folder.
Third-party assets
While some of the necessary assets are uploaded along with the source code, most of the third-party meshes and textures are omitted due to file size limitations. They can be downloaded from here, and should be placed in the corresponding subfolders of the Assets folder.
Running the framework
Generating training datasets
The datasets can be generated using Python. To this end, open a terminal, navigate to the Assets/Scripts/Python folder, then use the following commands to generate the datasets:
```shell
python eye_reconstruction.py generate aberration_estimator
python eye_reconstruction.py generate eye_estimator
python eye_aberrations.py generate aberration_estimator
python eye_refocusing.py generate refocus_estimator dataset
```
Each command is responsible for generating a single dataset for the corresponding networks. The datasets used to perform the measurements for our papers can be downloaded from here, and should be placed in the Assets/Scripts/Python/Data/Train folder.
Training is then performed using the following set of commands:
```shell
python eye_reconstruction.py train aberration_estimator network
python eye_reconstruction.py train eye_estimator network
python eye_aberrations.py train aberration_estimator network
python eye_refocusing.py train refocus_estimator network
```
Once finished, the trained networks will be available in the Assets/Scripts/Python/Networks folder.
Lastly, the trained networks must be manually exported for use with the C++ framework. To this end, the following commands must be used:
```shell
python eye_reconstruction.py export aberration_estimator network
python eye_reconstruction.py export eye_estimator network
python eye_aberrations.py export aberration_estimator network
python eye_refocusing.py export refocus_estimator network
```
Once finished, the exported files will be available in the Assets/Generated/Networks folder.
Generating build files for the C++ backend
The framework relies on Premake5 to generate the necessary project files. Premake5 is included in the archive; to invoke it, use the following command in the project's main folder:
```shell
premake5 --matlab_root=$PATH$ vs2019
```
where $PATH$ is the path to the MATLAB installation's root folder.
The build script assumes a MATLAB R2020b installation by default (c:/Program Files/MATLAB/R2020b/), so the --matlab_root switch can be simply omitted if such a MATLAB version is present, leading to the following:
```shell
premake5 vs2019
```
After Premake is finished, the generated build files can be found in the Build folder.
Building the C++ backend with Visual Studio
The solution can be opened in Visual Studio and simply built by selecting the desired build configuration. No additional steps are required.
Building the C++ backend with MSBuild
Alternatively, the framework can be built from the command line using MSBuild:
- Open the VS Developer Command Prompt.
- Navigate to the Build folder.
- Build the project using `msbuild /p:Configuration=Release`.
Running the C++ backend
From within Visual Studio, the program can be simply started using the Start Debugging option.
The framework uses sensible defaults for the rendering arguments. Overriding these can be done in the following ways:
- If using SmartCommandLineArguments, the set of active arguments can be set via the extension's window (accessible via View > Other Windows).
- In the absence of the aforementioned extension, the arguments can be set manually via the project settings window, located under the Debugging category.
Code organization
Parametric eye model and patternsearch-based eye reconstruction
The entirety of the eye-related MATLAB code base can be found in Assets/Scripts/Matlab/EyeReconstruction, which was built on Optometrika, a third party library for ray tracing optical systems. Note that Optometrika was modified heavily for our specific use case and several parts of the library were removed for brevity.
The most important classes and functions are the following:
- EyeParametric: Builds the parametric eye model; stores the eye parameters, constructs the necessary optical elements, and manages the computation of Zernike aberration coefficients.
- EyeReconstruction: Implements eye reconstruction using `patternsearch`, with extensive customizability.
- ZernikeLens: A custom aspherical lens with additional surface perturbations controlled using a Zernike surface.
- compute_aberrations: Performs the actual computation of the Zernike aberration coefficients for an input eye model and computation parameters.
The main MATLAB script folder also contains the PSNR computation routine.
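For orientation, the Zernike aberration coefficients computed by `compute_aberrations` weight Zernike polynomial terms evaluated over the unit pupil. The sketch below shows two low-order terms in Python purely as a conceptual illustration; it is not the MATLAB implementation, and the function names are invented for this example:

```python
import math

def zernike_defocus(rho, theta=0.0):
    """Z_2^0 (defocus), OSA/ANSI normalization: sqrt(3) * (2*rho^2 - 1)."""
    return math.sqrt(3.0) * (2.0 * rho ** 2 - 1.0)

def zernike_astigmatism(rho, theta):
    """Z_2^2 (vertical astigmatism): sqrt(6) * rho^2 * cos(2*theta)."""
    return math.sqrt(6.0) * rho ** 2 * math.cos(2.0 * theta)

def wavefront(rho, theta, c_defocus, c_astig):
    """A wavefront error is a weighted sum of Zernike terms; the weights
    are the aberration coefficients."""
    return (c_defocus * zernike_defocus(rho, theta)
            + c_astig * zernike_astigmatism(rho, theta))
```

Here `rho` is the normalized radial pupil coordinate in [0, 1] and `theta` the azimuthal angle; a full implementation would cover all terms up to the desired radial order.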
The rest
