<!-- mcp-name: io.github.Alfanous-team/alfanous -->

Alfanous API

Alfanous is a Quranic search engine API that provides simple and advanced search capabilities for the Holy Qur'an. It enables developers to build applications that search through Quranic text in Arabic, with support for Buckwalter transliteration, advanced query syntax, and rich metadata.

Features

  • Powerful Search: Search Quranic verses with simple queries or advanced Boolean logic
  • Arabic Support: Full support for Arabic text and Buckwalter transliteration
  • Rich Metadata: Access verse information, translations, recitations, and linguistic data
  • Flexible API: Use as a Python library or RESTful web service
  • Faceted Search: Aggregate results by Sura, Juz, topics, and more
  • Multiple Output Formats: Customize output with different views and highlight styles

Quickstart

Installation

Install from PyPI using pip:

$ pip install alfanous3

Basic Usage

Python Library

>>> from alfanous import api

# Simple search for a word
>>> api.search(u"الله")

# Advanced search with options
>>> api.do({"action": "search", "query": u"الله", "page": 1, "perpage": 10})

# Search using Buckwalter transliteration
>>> api.do({"action": "search", "query": u"Allh"})

# Get suggestions
>>> api.do({"action": "suggest", "query": u"الح"})

# Correct a query
>>> api.correct_query(u"الكتاب")

# Get metadata information
>>> api.do({"action": "show", "query": "translations"})

Web Service

You can also use the public web service:

  • Search: http://alfanous.org/api/search?query=الله
  • With transliteration: http://alfanous.org/api/search?query=Allh

Or run your own web service locally (see alfanous_webapi).
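Since the web-service endpoints above are plain HTTP GET requests, a client only needs to URL-encode the query parameter. A minimal sketch using only the Python standard library (the endpoint URL comes from the examples above; the `build_search_url` helper itself is illustrative, not part of the alfanous package):

```python
from urllib.parse import urlencode

# Base endpoint of the public web service (from the examples above)
BASE_URL = "http://alfanous.org/api/search"

def build_search_url(query, **params):
    """Return a fully URL-encoded search request for the Alfanous web API."""
    params["query"] = query
    return BASE_URL + "?" + urlencode(params)

# Arabic text is percent-encoded automatically by urlencode
url = build_search_url("الله")
```

Transliterated queries pass through unchanged, e.g. `build_search_url("Allh")` yields `http://alfanous.org/api/search?query=Allh`.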

Quick Examples

Search for phrases:

>>> api.search(u'"الحمد لله"')

Boolean search (AND, OR, NOT):

>>> api.search(u'الصلاة + الزكاة')    # AND
>>> api.search(u'الصلاة | الزكاة')    # OR
>>> api.search(u'الصلاة - الزكاة')    # NOT

Fielded search:

>>> api.search(u'سورة:يس')           # Search in Sura Yasin
>>> api.search(u'سجدة:نعم')          # Search verses with sajda

Wildcard search:

>>> api.search(u'*نبي*')             # Words containing "نبي"

Faceted search (aggregate by fields):

>>> api.do({
...     "action": "search",
...     "query": u"الله",
...     "facets": "sura_id,juz"
... })

Documentation

API Reference

Core Functions

  • api.search(query, **options) - Search Quran verses
  • api.do(params) - Unified interface for all actions (search, suggest, show, list_values, correct_query)
  • api.correct_query(query, unit, flags) - Get a spelling-corrected version of a query
  • api.get_info(category) - Get metadata information

The underlying raw-output engine is exposed as Engine in alfanous.api (and re-exported from alfanous directly). Use it as a context manager so that index resources are properly released:

from alfanous.api import Engine
# or equivalently:
# from alfanous import Engine

with Engine() as engine:
    result = engine.do({"action": "search", "query": u"الله"})

Search Parameters

Common parameters for api.do() with action="search":

  • query (str): Search query (required)
  • unit (str): Search unit - "aya", "word", or "translation" (default: "aya")
  • page (int): Page number (default: 1)
  • perpage (int): Results per page, 1-100 (default: 10)
  • sortedby (str): Sort order - "score", "relevance", "mushaf", "tanzil", "ayalength" (default: "score")
  • reverse (bool): Reverse the sort order (default: False)
  • view (str): Output view - "minimal", "normal", "full", "statistic", "linguistic" (default: "normal")
  • highlight (str): Highlight style - "css", "html", "bold", "bbcode" (default: "css")
  • script (str): Text script - "standard" or "uthmani" (default: "standard")
  • vocalized (bool): Include Arabic vocalization (default: True)
  • translation (str): Translation ID to include
  • recitation (str): Recitation ID to include (1-30, default: "1")
  • fuzzy (bool): Enable fuzzy search — searches both aya_ (exact) and aya (normalised/stemmed) fields, plus Levenshtein distance matching (default: False). See Exact Search vs Fuzzy Search.
  • fuzzy_maxdist (int): Maximum Levenshtein edit distance for fuzzy term matching — 1, 2, or 3 (default: 1, only used when fuzzy=True).
  • facets (str): Comma-separated list of fields for faceted search
  • filter (dict): Filter results by field values

For a complete list of parameters and options, see the detailed documentation.
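The documented defaults and ranges above can be sanity-checked client-side before calling the API. The helper below is an illustrative sketch (the `SEARCH_DEFAULTS` table and `prepare_search` name are not part of the alfanous package); it fills in the documented defaults and rejects out-of-range values:

```python
# Documented defaults for api.do(action="search"); this validator is an
# illustrative sketch, not part of the alfanous package itself.
SEARCH_DEFAULTS = {
    "unit": "aya", "page": 1, "perpage": 10, "sortedby": "score",
    "reverse": False, "view": "normal", "highlight": "css",
    "script": "standard", "vocalized": True, "fuzzy": False,
    "fuzzy_maxdist": 1,
}

def prepare_search(query, **options):
    """Merge user options with the documented defaults and range-check them."""
    if not query:
        raise ValueError("query is required")
    params = dict(SEARCH_DEFAULTS, **options)
    if not 1 <= params["perpage"] <= 100:
        raise ValueError("perpage must be between 1 and 100")
    if params["fuzzy_maxdist"] not in (1, 2, 3):
        raise ValueError("fuzzy_maxdist must be 1, 2, or 3")
    params.update({"action": "search", "query": query})
    return params

params = prepare_search(u"الله", perpage=20, fuzzy=True)
```

The resulting dict can be passed straight to `api.do(params)`.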

Advanced Features

Exact Search vs Fuzzy Search

Alfanous provides two complementary search modes that control which index fields are queried.

Exact Search (default — fuzzy=False)

When fuzzy search is off (the default), queries run against the aya_ field, which stores the fully-vocalized Quranic text with diacritical marks (tashkeel) preserved. This mode is designed for precise, statistical matching:

  • Diacritics in the query are significant — مَلِكِ and مَالِكِ are treated as different words.
  • No stop-word removal, synonym expansion, or stemming is applied to the query.
  • Ideal when you need exact phrase matches, reproducible result counts, or statistical analysis.

# Default exact search — only the vocalized aya_ field is used
>>> api.search(u"الله")
>>> api.search(u"الله", fuzzy=False)

# Phrase match with full diacritics
>>> api.search(u'"الْحَمْدُ لِلَّهِ"')

Fuzzy Search (fuzzy=True)

When fuzzy search is on, queries run against both the aya_ field (exact matches) and the aya field (a separate index built for broad, forgiving search). At index time the aya field is processed through a richer pipeline:

  1. Normalisation — shaped letters, tatweel, hamza variants and common spelling errors are unified.
  2. Stop-word removal — high-frequency function words (e.g. مِنْ، فِي، مَا) are filtered out so they do not dilute result relevance.
  3. Synonym expansion — each token is stored together with its synonyms, so a query for one word automatically matches equivalent words.
  4. Arabic stemming — words are reduced to their stem using the Snowball Arabic stemmer (via pystemmer), so different morphological forms of the same root match each other.

No heavy operations are performed on the query string at search time; all the linguistic enrichment lives in the index.
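To illustrate step 1, the normalisation stage can be sketched as a small function. The exact character mappings used by Alfanous's indexing pipeline are not reproduced here, so treat the two rules below (tatweel removal, hamza-on-alif unification) as illustrative assumptions:

```python
# Illustrative Arabic normalisation sketch; the real Alfanous pipeline
# may apply different or additional mappings.
TATWEEL = "\u0640"          # the kashida / elongation character
HAMZA_VARIANTS = str.maketrans({
    "\u0623": "\u0627",     # أ -> ا
    "\u0625": "\u0627",     # إ -> ا
    "\u0622": "\u0627",     # آ -> ا
})

def normalize(text):
    """Drop tatweel and unify hamza-on-alif variants to bare alif."""
    return text.replace(TATWEEL, "").translate(HAMZA_VARIANTS)

normalize("أإآـ")  # all three variants map to plain alif; tatweel is dropped
```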

Additionally, for each Arabic term in the query, a Levenshtein distance search is performed against the aya_ac field (unvocalized, non-stemmed). This catches spelling variants and typos within a configurable edit-distance budget controlled by fuzzy_maxdist.

# Fuzzy search — aya_ (exact) + aya (normalised/stemmed) + Levenshtein distance on aya_ac
>>> api.search(u"الكتاب", fuzzy=True)

# Increase edit distance to 2 to tolerate more spelling variation
>>> api.search(u"الكتاب", fuzzy=True, fuzzy_maxdist=2)

# Via the unified interface
>>> api.do({
...     "action": "search",
...     "query": u"مؤمن",
...     "fuzzy": True,
...     "fuzzy_maxdist": 1,
...     "page": 1,
...     "perpage": 10
... })

| fuzzy_maxdist | Behaviour |
|---|---|
| 1 (default) | Catches single-character insertions, deletions, or substitutions |
| 2 | Broader tolerance — useful for longer words or noisy input |
| 3 | Maximum supported — use with care as recall increases significantly |

Fuzzy mode is particularly useful when:

  • The user does not know the exact vocalized form of a word.
  • You want morphologically related words to appear in the same result set (e.g. searching كتب also surfaces كتاب, كاتب, مكتوب).
  • You want synonym-aware retrieval without writing explicit OR queries.
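The edit-distance budget discussed above follows the standard Levenshtein definition. A compact reference implementation (illustrative only, not the engine's internal matcher):

```python
def levenshtein(a, b):
    """Classic dynamic-programming Levenshtein edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# A term matches under fuzzy_maxdist=1 if it is within one edit of an indexed term
levenshtein("كتاب", "كتب")  # one deletion -> distance 1
```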

Note: pystemmer must be installed for stemming to take effect (pip install pystemmer). If the package is absent, the stem filter silently degrades to a no-op, leaving normalisation and stop-word removal still active.
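The degrade-to-no-op behaviour described in the note can be sketched as follows. The module name (Stemmer, installed as pystemmer) and the "arabic" algorithm name come from the PyStemmer/Snowball projects; the fallback wiring itself is an assumption about how such a guard is typically written, not Alfanous's exact code:

```python
try:
    import Stemmer  # provided by the pystemmer package (Snowball stemmers)
    _stemmer = Stemmer.Stemmer("arabic")

    def stem(word):
        """Reduce an Arabic word to its Snowball stem."""
        return _stemmer.stemWord(word)
except ImportError:
    # pystemmer absent: degrade silently to a no-op, as the note describes
    def stem(word):
        return word

stem("الكتاب")  # stemmed if pystemmer is installed, returned unchanged otherwise
```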

List Field Values

list_values returns every unique indexed value for a given field. Use it to discover the full vocabulary of searchable fields — for example, all available translation identifiers, part-of-speech tags, or root words — before composing a query.

# Get all unique root values in the index
>>> api.do({"action": "list_values", "field": "root"})
# Returns: {"list_values": {"field": "root", "values": [...], "count": N}}

# Discover all indexed translation IDs
>>> api.do({"action": "list_values", "field": "trans_id"})

# Discover all part-of-speech categories for word search
>>> api.do({"action": "list_values", "field": "pos"})

# Retrieve all indexed lemmas on demand (replaces the former show/lemmas)
>>> api.do({"action": "list_values", "field": "lemma"})

Parameters:

  • field (str): The name of the indexed field whose unique values you want (required).

Return value:

A dictionary with a list_values key containing:

  • field — the requested field name.
  • values — sorted list of unique non-empty indexed values.
  • count — length of the values list.
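Given the documented response shape, unpacking a result is straightforward. The sample response below is fabricated for illustration; only the key layout (list_values containing field, values, and count) comes from the description above:

```python
def unpack_list_values(response):
    """Extract (field, values, count) from a list_values response dict."""
    payload = response["list_values"]
    return payload["field"], payload["values"], payload["count"]

# Fabricated sample response following the documented shape
sample = {"list_values": {"field": "pos", "values": ["noun", "verb"], "count": 2}}
field, values, count = unpack_list_values(sample)
assert count == len(values)
```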

Query Correction

correct_query() uses Whoosh's built-in spell-checker to compare each term in the query against the index vocabulary and replace unknown terms with the closest known alternative. When the query is already valid (all terms appear in the index) the corrected value in the response is identical to the original input.

# Correct a query via the dedicated function
>>> api.correct_query(u"الكتاب")
# Returns:
# {"correct_query": {"original": "الكتاب", "corrected": "الكتاب"}, "error": ...}

# Correct a misspelled / out-of-vocabulary term
>>> api.correct_query(u"…")