# GiraffeCV
Multi-sensor panoramic camera mass-production calibration and real-time stitching technology.
## Install / Use
```
/learn @stevenlu137/GiraffeCVREADME
```
## Quick navigation
If you want to:

**Build a panoramic camera:**
Read the "Aquila User Manual" section below to calibrate your camera, and use the publicly released PanoPlayer to browse panoramic videos.

**Build a panoramic video application:**
If you would rather not read the documentation, nothing beats reading a simple sample program directly. Please refer to the three sample programs under:
- Projects/UI/PanoPlayer
- Projects/VideoStitching/VideoStitching/PanoRender/Testers

For more details, please refer to the PanoRender SDK documentation below.
## VideoStitching
VideoStitching is a set of software technologies for building multi-view panoramic cameras. It comprises the mass-production calibration tool Aquila, which is not open source and currently ships only as a limited binary release, and the open-source high-performance real-time stitching module PanoRenderOGL.
Built on OpenGL, PanoRenderOGL implements an efficient stitching and fusion algorithm that runs on a wide range of PCs and embedded environments. The algorithm offers two output methods: drawing directly into a window via OpenGL, or acting as an image-processing module that outputs panoramic frame data. It is among the most efficient panoramic stitching algorithms available to date, achieving super-high-resolution real-time panoramic stitching and fusion on an ordinary mid-range PC.
It supports the following features:
- Any number of cameras (tested with up to 24 cameras);
- Any type of camera (ordinary/fisheye) mixing;
- Any camera placement combination (dual fisheye/multi-eye 180 degrees/multi-eye 360 degrees/panoramic PTZ linkage, etc.);
- Instant initialization;
- Main and sub-stream real-time switching;
- Panoramic projection type real-time switching;
- PTZ linkage and partial zooming;
- Panoramic frame multi-resolution real-time output;
- Physical direction <-> panorama pixel coordinate forward and backward projection mapping;
- In VR applications, the algorithm also supports stereoscopic 3D panoramic stitching.
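The forward/backward projection mapping between physical directions and panorama pixel coordinates depends on the panorama projection type. As a minimal sketch, assuming an equirectangular (spherical unwrapped) panorama, the two mappings can be written as follows; PanoRender's actual SDK calls and supported projection types may differ, this only illustrates the underlying math:

```python
import math

def dir_to_pixel(yaw, pitch, width, height):
    """Forward projection: map a physical viewing direction
    (yaw, pitch in radians) to equirectangular pixel coordinates."""
    x = (yaw / (2.0 * math.pi) + 0.5) * width
    y = (0.5 - pitch / math.pi) * height
    return x, y

def pixel_to_dir(x, y, width, height):
    """Backward projection: map a panorama pixel back to a direction."""
    yaw = (x / width - 0.5) * 2.0 * math.pi
    pitch = (0.5 - y / height) * math.pi
    return yaw, pitch
```

For example, the direction straight ahead (yaw = 0, pitch = 0) lands in the exact center of the panorama, and applying the two functions in sequence returns the original direction.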
## Applicable scenarios and hardware requirements
VideoStitching is designed mainly for the mass production of high-efficiency, high-resolution multi-camera panoramic video cameras, so it is not suitable for application scenarios such as VR movie production.
It only supports panoramic stitching when the cameras are clustered around a common center in a compact structure; it does not support stitching when the camera positions are far apart. Generally speaking, we require a large overlapping field of view between adjacent cameras (for example, 1/5 of their field of view), although with some special techniques the stitching problem can also be solved for a small overlapping field of view.
Real-time stitching runs on X86/X64 back-end platforms (in theory it can run on any platform with OpenGL, but at present we only deploy it on X86/X64). The graphics card requirements are extremely low: the integrated graphics of a common Intel CPU can handle stitching tasks at normal resolution.
It is best to synchronize frames and imaging parameters (such as exposure) between cameras in hardware; otherwise picture consistency suffers and the best visual experience cannot be obtained.
## Aquila User Manual
The traditional panoramic image stitching process consists of two parts. One part calculates the geometric and optical parameters between the cameras through image registration, called calibration. The other part projects the original images from each camera onto the stitching surface and fuses them into a panoramic image, i.e., stitching. When the relative attitudes between the cameras are fixed and the optical parameters of the camera itself are fixed, the calibration process only needs to be done once. Therefore, we separate the two processes. At the factory, we carry out a calibration process in a strictly controlled environment. The output parameters are written to the device, and read out directly during stitching. This not only avoids the impact of calibration on the end user's experience but also ensures the stitching effect.
The calibration process is divided into two steps. The first step is to estimate parameters such as lens distortion, focal length, and the position of the optical axis on the sensor by collecting specific patterns shot by each camera. This step is called "internal parameter calibration". The second step is to estimate their relative spatial relationship, such as rotation angle, by collecting scenes of the same distant view shot by adjacent cameras, and calculate the optimal parameters by integrating the relationships between all cameras. This step is called "external parameter calibration". After calibration, the calculated result is a calibration file with only a few KB to tens of KB. This file records all the information for stitching the multi-channel video stream collected by this device into a panoramic view and will be used as the initialization input for PanoRender. We recommend that device manufacturers embed this file into the multi-sensor panoramic camera so that the device parameters can be directly obtained during use to complete the stitching.
The two calibration tasks described above are both performed by the panoramic camera calibration tool Aquila. Aquila is a command-line tool; this chapter introduces the purpose and usage of each command, section by section.
## Equipment and Environment Preparation
System requirements: a PC equipped with an NVIDIA graphics card, a working network connection, and the panoramic device to be calibrated connected.
Camera requirements: focus each camera so that distant scenes are sharp, then fix the focal length. Once the calibration process starts, you must not adjust the focus or change the camera's hardware structure. In principle, the focus must not be adjusted after calibration either: changing the focal length invalidates the calibration and produces visible seams in the stitching.
Internal parameter calibration is carried out indoors. A checkerboard pattern of about 0.6 m x 0.6 m is required (it should be able to fill the camera's field of view at the closest distance at which the lens stays in focus). The surface of the calibration board must be flat; this is crucial to the accuracy of the calibration results.
The external parameter calibration phase requires an open scene, for example a view out of a window with no obstacles within 100 meters and richly textured high-rise buildings or mountains beyond 100 meters.
Accurate external parameter calibration requires a large field-of-view overlap between adjacent cameras. For the very special cases of no overlap or low overlap, Aquila also provides a solution, but it is not part of the publicly released functionality and is intended only for special needs.
## Internal Parameter Calibration
For internal parameter calibration, each camera collects about 20-40 sharp photos of a complete checkerboard from different angles, from which the algorithm estimates that camera's internal parameters. Because of lens distortion, the captured checkerboard is severely warped, but once the internal parameters have been estimated the distortion can be removed. (Note that the illustration here is only a demonstration; the actual stitching algorithm is far more sophisticated than this simple undistortion. As you may notice, even after undistortion there is residual distortion at the image edges and the field of view has shrunk; the stitching algorithm solves these problems with more careful processing.)
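To see why a straight checkerboard edge appears bent, consider a simple radial (Brown) distortion model in normalized image coordinates. This is only an illustrative sketch: the internal parameters Aquila actually estimates are richer (fisheye models, focal length, optical-axis position), and the coefficients `k1`, `k2` here are hypothetical:

```python
def distort(x, y, k1, k2):
    """Apply a two-coefficient radial distortion model to a point in
    normalized image coordinates. Points farther from the optical axis
    (larger r) are displaced more, which is what bends straight lines."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

Undistortion is the inverse problem: given the estimated coefficients, solve for the original point, which is what makes the checkerboard edges straight again.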
## External Parameter Calibration
Calibration is performed for each pair of adjacent cameras with overlapping fields of view. Place the open scene in the overlapping field of view of the two cameras and collect images with the command for that camera pair. The calibration program computes the matching relationship between the two camera images in the overlap region and displays it on screen. If the displayed matches, inspected by eye, satisfy the following conditions:
- the images are clear, without motion blur;
- the matches are correct, without mismatches;
- the number of matching points is relatively large;
- the matching points are sufficiently dispersed across the image;
then use the accept command to confirm this collection and proceed to the next pair of cameras.
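The last two acceptance criteria above can be checked mechanically. As a sketch (the thresholds are illustrative, not Aquila's actual values), given the matched points in one image of the pair:

```python
def matches_acceptable(points, width, height, min_count=30, min_spread=0.25):
    """Heuristic mirror of the acceptance criteria: enough matching
    points, spread over a sufficiently large fraction of the image.
    points is a list of (x, y) pixel coordinates in one image."""
    if len(points) < min_count:
        return False  # too few matches
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    spread_x = (max(xs) - min(xs)) / width
    spread_y = (max(ys) - min(ys)) / height
    # matches clustered in one corner give unstable rotation estimates
    return spread_x >= min_spread and spread_y >= min_spread
```

Dispersion matters because matches concentrated in a small region constrain the relative rotation poorly, so the optimal parameters computed over all camera pairs would be less reliable.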
## Calibration Example
Let's take a four-channel 180-degree panoramic camera as an example to illustrate the use of Aquila.
Create a directory to store calibration tasks:
```
mkdir ~/calib_tasks
export G_CALIBRATION_TASKS=~/calib_tasks
```
Launch Aquila.exe.
We have two ways to start a calibration task: from scratch, or from a template.

Start from scratch:

**createtask taskname:Word**
Create a new task.
Example:
```
createtask task0
```
**vertex vertices:List**
Create a group of vertices. Cameras are treated as vertices of a graph here; the two concepts are interchangeable.
Example:
```
vertex [cam0 cam1 cam2 cam3]
```
**panotype panoTypeName:Word cropSetting:List**
Add a panotype.
panoTypeName: the name of the specified panotype. All available panorama types are listed in a separate section.
cropSetting: [leftCropRatio rightCropRatio topCropRatio bottomCropRatio]. For unwrapped panoramas, the final panorama needs to be cropped; these four parameters set the crop ratio of each of the four edges. The default value is [0.0 0.0 0.0 0.0], meaning no cropping.
Example:
```
panotype ImmersionSemiSphere
panotype UnwrappedCylinder180 [0.1 0.1 0.1 0.1]
```
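The effect of the cropSetting ratios on the output size can be sketched as follows. This is illustrative only (the actual cropping is performed inside PanoRender, and the panorama dimensions here are made up):

```python
def cropped_size(width, height, left, right, top, bottom):
    """Output size of an unwrapped panorama after applying the
    cropSetting ratios [left right top bottom]."""
    new_w = int(round(width * (1.0 - left - right)))
    new_h = int(round(height * (1.0 - top - bottom)))
    return new_w, new_h
```

For example, with the [0.1 0.1 0.1 0.1] setting above, a 4000 x 2000 unwrapped panorama would be trimmed by 10% on each edge, yielding 3200 x 1600.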
**mode sourceMain:Word sourc
