GazeTrackingOfflineProcessing
Copyright (C) 2014-2015, André Pomp andre.pomp@rwth-aachen.de
Copyright (C) 2014-2015, Jó Ágila Bitsch jo.bitsch@comsys.rwth-aachen.de
Copyright (C) 2014-2015, Oliver Hohlfeld oliver.hohlfeld@comsys.rwth-aachen.de
Copyright (C) 2014-2015, Chair of Computer Science 4, RWTH Aachen University, klaus@comsys.rwth-aachen.de
ABOUT
The GazeTrackingOfflineProcessing Framework is an extension of the GazeTrackingFramework. It allows videos recorded with the GazeTrackingFramework to be evaluated offline on a computer with far more computation power than a mobile device, so the processing is much faster. As the gaze tracking algorithm, we use the same modified version of EyeTab as in the GazeTrackingFramework, including, e.g., the pupil detection of EyeLike. The code in this framework is therefore built on the same code base as the GazeTrackingFramework. If you extend the GazeTrackingFramework with an additional eye tracking algorithm, you can extend this framework in the same way to support offline processing for it. For more information, we refer to the master thesis (download) during which this framework was developed.
FREE SOFTWARE
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.
IF YOU NEED ANOTHER LICENSE
If you are planning to integrate GazeTrackingOfflineProcessing into a commercial product, please contact us for licensing options via email at:
jo.bitsch@comsys.rwth-aachen.de
Requirements
This project requires:
All other libraries are included.
Used Libraries
This project is based on several open-source libraries. Please respect their licenses if you use them. You can find copies of their licenses in the "licenses" folder.
- EyeTab GazeTracking
- EyeLike Pupil Detection
- OpenCV
- TBB Library
- Eigen
- FFMPEG
HOW TO USE THIS SOFTWARE
- Download the tools that are listed under requirements and install them
- Clone the git repository
- Build the project using CMake (build script included in the build folder)
- Load recorded gaze tracking data folders either from the gaze tracking database or from the device where you recorded them
- Start the post-processing, specifying the following options according to your needs:
- --input <filepath>: The path to the folder where the video files and settings are located
- --output <foldername>: The folder name for the output (will be created inside the input folder)
- --eyealg <algorithm>: The algorithm used for detecting the eye center. Possible values: grad, isoph, comb
- --gazealg <algorithm>: The algorithm used for estimating the gaze. Possible values: approx, geo
- --fastwidth <width>: The window size of the scaled window used for detecting the eye center. (optional) Default: 50 for the grad algorithm, 80 for the isoph algorithm
- --convertfps: Indicates that the frame rate (FPS) should be converted (optional) Default: off
- --drawonvideo: Indicates that the gaze points should be drawn on the recorded video (optional) Default: off
- --drawonscreen: Indicates that the gaze points should be drawn on the recorded screen (optional, but requires --convertfps) Default: off
For more information, we refer to the master thesis (download) during which this framework was developed.