AIDA
Annotation of Image Data by Assignment.
See a demo
Play with a live example here
The Basic Idea
AIDA is an attempt to bring an open-source, web-based workflow to image annotation. Currently, in the biomedical imaging space, image annotation is largely confined to single-computer, shrink-wrapped software with limited interactive capabilities and few, usually closed, data formats.
AIDA is a web interface that enables distributed teams of researchers to directly annotate images with easy-to-use, on-screen drawing tools. AIDA supports the creation of well-defined annotation trials which include a series of high-resolution images and a specific set of annotation tasks.
For documentation and further information see the Wiki.
How has it been implemented?
The user interface is a React NextJS single-page application, encapsulating and interacting with OpenLayers to provide the images and drawing functionality. Tailwind is the CSS framework.
AIDA reads and writes data in a simple JSON structure over a web API.
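As an illustration of what a JSON annotation payload exchanged over such an API could look like, here is a hypothetical sketch in TypeScript. The field names (layers, items, points, and so on) are assumptions for illustration, not AIDA's actual schema:

```typescript
// Hypothetical shape of an annotation payload. Field names are
// illustrative assumptions, not AIDA's real data format.
interface Annotation {
  name: string;
  layers: {
    name: string;
    items: { type: "polygon" | "rectangle"; points: [number, number][] }[];
  }[];
}

// Example round-trip: serialise to JSON (as it would travel over the
// web API) and parse it back.
const annotation: Annotation = {
  name: "demo",
  layers: [
    {
      name: "tumour",
      items: [{ type: "polygon", points: [[0, 0], [100, 0], [50, 80]] }],
    },
  ],
};

const json = JSON.stringify(annotation);
const parsed: Annotation = JSON.parse(json);
console.log(parsed.layers[0].items[0].points.length); // 3
```

Because the structure is plain JSON, any HTTP client or server can produce and consume it without special tooling.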
What's planned?
The next stage of development will be to integrate intelligent tools that leverage the power of machine learning techniques. We hope to enhance the ability of the user to quickly and accurately mark up images through predictive assistance.
License
The software is published as Open Source under the permissive MIT license.
Run Locally
You can use AIDA on your local machine. The only requirement is NodeJS.
- Clone the repository
- Install the dependencies via NPM: npm install
- Run the build script: npm run build
- Add the images you want to annotate to the /local/data/ directory
- Run the local server application via npm run start
- Navigate to the localhost webserver specified in the console
- Annotations are read from and written to /local/data/
Example local server data directory
local
| local.ts
| tsconfig.json
| ...
|
| └──data
| | README.md
| | project.json // AIDA project file (see below for example content)
| | annotation.json
|
| └──image-dz // DeepZoom format 2D image
| | | image.dzi
| |
| | └──image_files
| | |
| | | └──0
| | | | 0_0.jpeg
| | | | 0_1.jpeg
| | | | ...
| | |
| | | └───1
| | | | 0_0.jpeg
| | | | 0_1.jpeg
| | | | ...
| | |
| | | └───...
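The image-dz layout above follows the DeepZoom convention: each numbered folder is a pyramid level, with the highest level at full resolution and each lower level halving the dimensions, and each level cut into fixed-size tiles named column_row.jpeg. A sketch of the level and tile arithmetic (the 256-pixel tile size and image dimensions are example values, not AIDA requirements):

```typescript
// DeepZoom pyramid arithmetic: the top level holds the full-resolution
// image, and each lower level halves the dimensions down to 1x1 at level 0.
function maxLevel(width: number, height: number): number {
  return Math.ceil(Math.log2(Math.max(width, height)));
}

// Dimensions of the image at a given pyramid level.
function levelSize(width: number, height: number, level: number): [number, number] {
  const scale = Math.pow(2, maxLevel(width, height) - level);
  return [Math.ceil(width / scale), Math.ceil(height / scale)];
}

// Number of column_row.jpeg tiles at a level, for a given tile size.
function tileCount(width: number, height: number, level: number, tileSize = 256): number {
  const [w, h] = levelSize(width, height, level);
  return Math.ceil(w / tileSize) * Math.ceil(h / tileSize);
}

// Example: a 10000 x 8000 pixel slide.
console.log(maxLevel(10000, 8000));      // 14
console.log(tileCount(10000, 8000, 14)); // 40 * 32 = 1280 tiles at full resolution
```

The viewer only fetches the tiles covering the current viewport at the current zoom, which is what makes very large images workable in a browser.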
project.json defines the combination of image and annotation data.
{
"image": "image-dz/image.dzi",
"annotation": "annotation.json"
}
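A minimal sketch of checking that a project.json carries both expected fields before serving it. This helper is purely illustrative and not part of AIDA itself:

```typescript
// Illustrative guard: a project.json should provide a relative path to
// the image and a relative path to the annotation file, both as strings.
function isValidProject(p: unknown): p is { image: string; annotation: string } {
  return (
    typeof p === "object" &&
    p !== null &&
    typeof (p as { image?: unknown }).image === "string" &&
    typeof (p as { annotation?: unknown }).annotation === "string"
  );
}

const project = JSON.parse(
  '{"image": "image-dz/image.dzi", "annotation": "annotation.json"}'
);
console.log(isValidProject(project)); // true
console.log(isValidProject({ image: "only-an-image.dzi" })); // false
```

A check like this gives a clearer error at startup than a failed image load later on.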
Develop
Requirement: NodeJS. Example workflow:
- Clone the repository
- Install dependencies via npm install
- For development: start a hot-reloading dev server with npm run start
- For deployment: bundle together with npm run build
Support for tiled images, International Image Interoperability Framework (IIIF)
This removes the requirement for the DZI file format and replaces it with an image web server. At this point it is still somewhat experimental.
- Deploy a Cantaloupe IIIF server as described here.
- Edit the Cantaloupe configuration file so that FilesystemSource.BasicLookupStrategy.path_prefix points to /local/data
- The Cantaloupe server must be running at localhost:8182
- Currently only TIFF files are supported.
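Once Cantaloupe is running, images are addressed through IIIF Image API URLs of the form {server}/{prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}. A sketch of composing such a URL; /iiif/2 is Cantaloupe's default Image API 2.x endpoint, and the identifier is the file name relative to path_prefix:

```typescript
// Build an IIIF Image API 2.x URL for a Cantaloupe server running on
// its default port. All parameter defaults below request the full image.
function iiifUrl(
  identifier: string,
  opts: { region?: string; size?: string; rotation?: string; quality?: string; format?: string } = {}
): string {
  const { region = "full", size = "full", rotation = "0", quality = "default", format = "jpg" } = opts;
  const base = "http://localhost:8182/iiif/2"; // Cantaloupe's default 2.x endpoint
  return `${base}/${encodeURIComponent(identifier)}/${region}/${size}/${rotation}/${quality}.${format}`;
}

console.log(iiifUrl("image.tif"));
// http://localhost:8182/iiif/2/image.tif/full/full/0/default.jpg
```

Region and size parameters let the viewer request just the visible portion of a slide, which is how IIIF substitutes for pre-tiled DZI pyramids.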
About
This application was built by Alan Aberdeen and Stefano Malacrino with contributions from Nasullah Khalid Alham and Ramón Casero. It originated at the Quantitative Biological Imaging Group, The University of Oxford.