NaLaMap
NaLaMap is an open-source platform that helps users find and analyze geospatial data in a natural way. It combines modern web technologies with AI capabilities to create an intuitive interface for interacting with geographic information.
Features
- Upload and display vector data on a map.
- Geocode Locations using OSM and GeoNames (e.g. hospitals, schools, roads, railways). Intelligent geometry filtering ensures queries return the correct feature types (e.g., road segments instead of bus stops). See OSM Geometry Filtering Documentation.
- Find and integrate data from existing Open Data Portals or your own databases.
- Chat with an AI agent to retrieve information on data content and quality.
- Multi-Provider LLM Support: Choose from OpenAI, Azure OpenAI, Google Gemini, Mistral AI, DeepSeek, Anthropic, Moonshot (Kimi), or xAI (Grok).
- Semantic OSM Geocoding: Optional embedding-powered tag search for improved geocoding accuracy. Supports offline hashing embeddings (default), OpenAI, or Azure embedding models.
- MCP Support: Experimental Model Context Protocol integration for extending the AI agent with external tools.
- AI-assisted map and layer styling.
- Automated Geoprocessing using natural language (e.g. buffers, centroids, intersections).
- Create and share GIS-AI-Applications for people without geodata expertise based on custom use-cases, processing logic and data-sources.
- Flexible extension possibilities for the toolbox, e.g. for including document or web search.
- Color Customization: Customize the application's color scheme to match corporate branding or personal preferences. See Color Customization Guide.
Versioning Strategy
NaLaMap follows Semantic Versioning for all releases using the format MAJOR.MINOR.PATCH:
- MAJOR version increments for incompatible API changes, significant architectural changes, or breaking changes to existing functionality
- MINOR version increments for new features, enhancements, or backwards-compatible functionality additions (e.g., new geospatial tools, additional data sources, UI improvements)
- PATCH version increments for backwards-compatible bug fixes, security patches, and minor improvements
Release Tags: All releases are tagged in Git using the format v{MAJOR}.{MINOR}.{PATCH} (e.g., v1.0.0, v1.2.3).
Pre-release versions may use suffixes like -alpha, -beta, or -rc for testing purposes (e.g., v1.1.0-beta.1).
Current Version: The project is currently in active development. The first stable release will be tagged as v1.0.0 once core functionality is complete and thoroughly tested.
Project Structure
nalamap/
├── backend/ # Python FastAPI backend
│ ├── api/ # API endpoints
│ ├── core/ # Core configurations
│ ├── models/ # Data models
│ ├── services/ # Business logic services
│ │ ├── agents/ # AI agent implementations
│ │ ├── ai/ # AI service providers
│ │ ├── database/ # Database connectors
│ │ └── tools/ # Utility tools
│ └── main.py # Application entry point
├── frontend/ # Next.js frontend
│ ├── app/ # Next.js application
│ │ ├── components/ # React components
│ │ ├── hooks/ # Custom React hooks
│ │ └── page.tsx # Main page component
│ └── public/ # Static assets
└── nginx/ # Nginx configuration for serving the application
📖 For detailed architecture documentation, see ARCHITECTURE.md
🤖 For AI agent development guidelines, see AGENTS.md
Simplified Entity Relationship Model
The following model was created to give you a high-level overview of how NaLaMap works. It shows an example user request to change the styling of a vector layer in the map. <img width="950" height="534" alt="image" src="https://github.com/user-attachments/assets/6a09918a-fbd0-4860-a362-a5d4f55e871a" />
Getting Started
⚙️ Prerequisites
- Git
- Python 3.10+
- Node.js 18+
- Docker & Docker Compose (optional)
- Poetry (for backend)
Quick Setup (Recommended)
Follow these steps to get the application running locally:
1. Clone the Repository
git clone git@github.com:nalamap/nalamap.git
cd nalamap
2. Environment Configuration
Create your environment file:
Create a .env file in the root directory based on the provided .env.example:
cp .env.example .env
Configure your environment variables:
Edit the .env file to include your configuration. The environment file contains several categories of settings:
- AI Provider Configuration: Choose between OpenAI, Azure OpenAI, Google AI, Mistral AI, DeepSeek, Anthropic, Moonshot, or xAI and provide the corresponding API keys
- Embedding Configuration: Choose between lightweight offline hashing (default), OpenAI, or Azure AI embeddings
- Database Settings: PostgreSQL connection details (a demo database is pre-configured)
- API Endpoints: Backend API base URL configuration
- Optional Services: LangSmith tracing for monitoring AI interactions
Map / WMTS Projection Safety:
To avoid rendering projection-mismatched WMTS layers, the backend filters out any WMTS layer that lacks a WebMercator (EPSG:3857 family) TileMatrixSet by default.
Environment variable to control this behavior:
NALAMAP_FILTER_NON_WEBMERCATOR_WMTS (default: true)
Set to false to allow all WMTS layers (may lead to visual misalignment unless tiles are in WebMercator).
Details: see docs/wmts.md.
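For example, to disable the filter in your .env (using the variable documented above; only advisable when all of your WMTS sources serve WebMercator tiles):

```shell
# Allow all WMTS layers, including those without an EPSG:3857
# TileMatrixSet; non-WebMercator tiles may render misaligned.
NALAMAP_FILTER_NON_WEBMERCATOR_WMTS=false
```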
Note: The .env.example includes a demo database connection that you can use for testing. For production use, configure your own database credentials.
⚠️ Important: Single Provider Selection
You can only use ONE AI provider at a time. The active provider is determined by the LLM_PROVIDER environment variable. To switch providers, change this value and restart the application.
Supported LLM_PROVIDER values and their models:
| Provider | LLM_PROVIDER Value | Default Model | Model Configuration | Additional Configuration |
|----------|-------------------|---------------|-------------------|--------------------------|
| OpenAI | openai | gpt-4o-mini | OPENAI_MODEL | OPENAI_API_KEY |
| Azure OpenAI | azure | User-defined | AZURE_OPENAI_DEPLOYMENT | AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, AZURE_OPENAI_API_VERSION |
| Google AI | google | gemini-1.5-pro-latest | GOOGLE_MODEL | GOOGLE_API_KEY |
| Mistral AI | mistral | mistral-large-latest | MISTRAL_MODEL | MISTRAL_API_KEY |
| DeepSeek | deepseek | deepseek-chat | DEEPSEEK_MODEL | DEEPSEEK_API_KEY |
| Anthropic | anthropic | claude-4.5-sonnet | — | ANTHROPIC_API_KEY |
| Moonshot | moonshot | kimi-k2.5 | — | MOONSHOT_API_KEY |
| xAI | xai | grok-2-latest | — | XAI_API_KEY |
Example configuration:
# Choose your provider
LLM_PROVIDER=openai
# Configure the model (optional - defaults to recommended model)
OPENAI_MODEL=gpt-4o-mini
# Add the corresponding API key
OPENAI_API_KEY=your_openai_api_key_here
# Note: You only need to configure the provider you're using
🎯 Model Selection: All providers now support configurable model selection via environment variables. If you don't specify a model, NaLaMap uses cost-effective default models optimized for geospatial tasks.
⚙️ Advanced Parameter Customization:
To modify advanced LLM parameters (temperature, max_tokens, timeout, etc.), edit the provider files in backend/services/ai/:
- openai.py: OpenAI configuration
- google_genai.py: Google AI configuration
- mistralai.py: Mistral AI configuration
- deepseek.py: DeepSeek configuration
- azureai.py: Azure OpenAI configuration
Each file contains a get_llm() function where you can adjust parameters like temperature, max_tokens, max_retries, etc.
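As a sketch (the helper name is hypothetical; the actual provider modules pass these arguments directly to their clients inside get_llm()), the tunable parameters mentioned above might be grouped like this:

```python
# Hypothetical helper collecting the parameters a provider's get_llm()
# typically forwards to its client constructor; adjusting defaults here
# avoids scattering literals across the provider files.
def llm_kwargs(temperature: float = 0.0, max_tokens: int = 2048,
               timeout_s: int = 60, max_retries: int = 2) -> dict:
    return {
        "temperature": temperature,
        "max_tokens": max_tokens,
        "timeout": timeout_s,
        "max_retries": max_retries,
    }
```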
3. Setup Database (Required)
NaLaMap requires a PostgreSQL/PostGIS database for user authentication and geospatial processing. The easiest way to run this locally is using Docker.
- Start the database container:
docker-compose up -d db
- Run database migrations:
cd backend
poetry run alembic upgrade head
Note: If you cannot use Docker, you must install PostgreSQL and PostGIS manually and update DATABASE_URL in your .env file.
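For reference, the db service started above typically looks something like this in docker-compose.yml. This is a sketch assuming the postgis/postgis image with illustrative credentials; check the repository's actual compose file for the real definition:

```yaml
services:
  db:
    image: postgis/postgis:16-3.4   # PostgreSQL with the PostGIS extension
    environment:
      POSTGRES_USER: nalamap        # illustrative credentials; keep them
      POSTGRES_PASSWORD: change_me  # in sync with DATABASE_URL in .env
      POSTGRES_DB: nalamap
    ports:
      - "5432:5432"
```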
4. Setup Backend (Python/FastAPI)
# Navigate to backend directory
cd backend
# Optional: run `poetry config virtualenvs.in-project true` to keep the .venv inside the repo
poetry install
# Start the backend server
poetry run python main.py
The backend will be available at http://localhost:8000
- API Documentation: http://localhost:8000/docs
Once the frontend is running (see the next step), the app will be available at http://localhost:3000
5. Setup Frontend (Next.js)
Open a new terminal and run:
# Navigate to frontend directory
cd frontend
# Install dependencies
npm i
# Start development server
npm run dev
Alternative: Docker Deployment
If you prefer using Docker:
1. Configure your environment variables as described above.
2. Start the application using Docker Compose:
docker-compose up
3. Access the application at http://localhost:80
Docker Development Environment
For a complete development environment with hot-reload capabilities:
docker-compose -f dev.docker-compose.yml up --build
Technologies Used
Backend
- FastAPI: Modern, fast web framework for building APIs
- LangChain: Framework for developing applications powered by language models
- LangGraph: For building complex AI agent workflows
- OpenAI/Azure/DeepSeek: AI model providers for natural language processing