Callytics
Callytics is an advanced call analytics solution that leverages speech recognition and large language models (LLMs)
technologies to analyze phone conversations from customer service and call centers. By processing both the
audio and text of each call, it provides insights such as sentiment analysis, topic detection, conflict detection,
profanity detection, and summarization. These cutting-edge techniques help businesses optimize customer interactions,
identify areas for improvement, and enhance overall service quality.
When an audio file is placed in the .data/input directory, the entire pipeline automatically starts running, and the
resulting data is inserted into the database.
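As a minimal sketch of such a directory watcher, here is a simple polling approach using only the standard library. The project itself may use a systemd service or file-system events instead; `watch_for_new_audio` and `run_pipeline` are illustrative names, not the project's actual API:

```python
from pathlib import Path

def watch_for_new_audio(input_dir: str, seen: set, extensions=(".mp3", ".wav")) -> list:
    """Return audio files in input_dir that have not been processed yet."""
    new_files = []
    for path in sorted(Path(input_dir).iterdir()):
        if path.suffix.lower() in extensions and str(path) not in seen:
            seen.add(str(path))
            new_files.append(path)
    return new_files

# Polling loop (sketch):
# seen = set()
# while True:
#     for audio in watch_for_new_audio(".data/input", seen):
#         run_pipeline(audio)  # hypothetical pipeline entry point
#     time.sleep(5)
```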
Note: This is version v1.1.0; many new features will be added, models will be fine-tuned or trained from
scratch, and various optimizations will be applied. For more information, see the Upcoming section.
Note: If you would like to contribute to this repository, please read the CONTRIBUTING first.
Table of Contents
- Prerequisites
- Architecture
- Math And Algorithm
- Features
- Demo
- Installation
- File Structure
- Database Structure
- Datasets
- Version Control System
- Upcoming
- Documentations
- License
- Links
- Team
- Contact
- Citation
Prerequisites
General
- Python 3.11 (or above)
Llama
- GPU (min 24GB) (or above)
- Hugging Face Credentials (Account, Token)
- Llama-3.2-11B-Vision-Instruct (or above)
OpenAI
- GPU (min 12GB) (for other processes such as Faster Whisper & NeMo)
- At least one of the following is required:
  - OpenAI Credentials (Account, API Key)
  - Azure OpenAI Credentials (Account, API Key, API Base URL)
Architecture

Math and Algorithm
This section describes the mathematical models and algorithms used in the project.
Note: This section covers the mathematical concepts and algorithms specific to this repository, rather than the
models used. Please refer to RESOURCES under the Documentations section for the repositories and models
utilized or referenced.
Silence Duration Calculation
The silence durations are derived from the time intervals between speech segments. Let
$$S = \{s_1, s_2, \ldots, s_n\}$$
represent the set of silence durations (in seconds) between consecutive speech segments, and let
$$\text{factor} \in \mathbb{R}^{+}$$
be a user-defined multiplier.
To determine a threshold that distinguishes significant silence from trivial gaps, two statistical methods can be applied:
1. Standard Deviation-Based Threshold
- Mean:
$$\mu = \frac{1}{n}\sum_{i=1}^{n}s_i$$
- Standard Deviation:
$$ \sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(s_i - \mu)^2} $$
- Threshold:
$$ T_{\text{std}} = \sigma \cdot \text{factor} $$
2. Median + Interquartile Range (IQR) Threshold
- Median:
Let:
$$ S = \{s_{(1)} \leq s_{(2)} \leq \cdots \leq s_{(n)}\} $$
be the ordered set of silence durations.
Then:
$$ M = \text{median}(S) = \begin{cases} s_{\left(\frac{n+1}{2}\right)}, & \text{if } n \text{ is odd}, \\[6pt] \dfrac{s_{\left(\frac{n}{2}\right)} + s_{\left(\frac{n}{2}+1\right)}}{2}, & \text{if } n \text{ is even}. \end{cases} $$
- Quartiles:
$$ Q_1 = s_{(\lfloor 0.25n \rfloor)}, \quad Q_3 = s_{(\lfloor 0.75n \rfloor)} $$
- IQR:
$$ \text{IQR} = Q_3 - Q_1 $$
- Threshold:
$$ T_{\text{median\_iqr}} = M + (\text{IQR} \times \text{factor}) $$
Total Silence Above Threshold
Once the threshold $$T$$ (either $$T_{\text{std}}$$ or $$T_{\text{median\_iqr}}$$) is defined, we sum only those silence durations that meet or exceed it:
$$ \text{TotalSilence} = \sum_{i=1}^{n} s_i \cdot \mathbf{1}(s_i \geq T) $$
where $$\mathbf{1}(s_i \geq T)$$ is an indicator function defined as:
$$ \mathbf{1}(s_i \geq T) = \begin{cases} 1, & \text{if } s_i \geq T, \\[4pt] 0, & \text{otherwise}. \end{cases} $$
Summary:
- Identify the silence durations:
$$ S = \{s_1, s_2, \ldots, s_n\} $$
- Determine the threshold using either:
Standard deviation-based:
$$ T = \sigma \cdot \text{factor} $$
Median+IQR-based:
$$ T = M + (\text{IQR} \cdot \text{factor}) $$
- Compute the total silence above this threshold:
$$ \text{TotalSilence} = \sum_{i=1}^{n} s_i \cdot \mathbf{1}(s_i \geq T) $$
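The steps above can be sketched in Python. This is a minimal illustration of both thresholding strategies, implementing the formulas exactly as written (1-indexed order statistics, population standard deviation); the function name and signature are illustrative, not the project's actual API:

```python
def silence_above_threshold(durations, factor, method="std"):
    """Sum the silence durations that meet or exceed the chosen threshold."""
    n = len(durations)
    s = sorted(durations)
    if method == "std":
        # T_std = sigma * factor (population standard deviation)
        mu = sum(s) / n
        sigma = (sum((x - mu) ** 2 for x in s) / n) ** 0.5
        threshold = sigma * factor
    elif method == "median_iqr":
        # T_median_iqr = M + IQR * factor, using 1-indexed order statistics
        median = s[(n - 1) // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
        q1 = s[max(int(0.25 * n) - 1, 0)]  # s_(floor(0.25 n))
        q3 = s[max(int(0.75 * n) - 1, 0)]  # s_(floor(0.75 n))
        threshold = median + (q3 - q1) * factor
    else:
        raise ValueError(f"unknown method: {method}")
    # TotalSilence: sum only the durations meeting or exceeding the threshold
    return sum(x for x in s if x >= threshold)

durations = [0.2, 0.3, 0.25, 2.5, 0.4, 3.0]
print(silence_above_threshold(durations, factor=1.0, method="std"))         # → 5.5
print(silence_above_threshold(durations, factor=1.5, method="median_iqr"))  # → 5.5
```

With these sample durations, both methods classify only the 2.5 s and 3.0 s gaps as significant silence, so each returns 5.5.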
Features
- [x] Speech Enhancement
- [x] Sentiment Analysis
- [x] Profanity Word Detection
- [x] Summary
- [x] Conflict Detection
- [x] Topic Detection
Demo
Please click for a demo: Callytics Demo

Installation
Linux/Ubuntu
sudo apt update -y && sudo apt upgrade -y
sudo apt install -y ffmpeg build-essential g++
git clone https://github.com/bunyaminergen/Callytics
cd Callytics
conda env create -f environment.yaml
conda activate Callytics
Environment
.env file sample:
# CREDENTIALS
# OPENAI
OPENAI_API_KEY=
# HUGGINGFACE
HUGGINGFACE_TOKEN=
# AZURE OPENAI
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_BASE=
AZURE_OPENAI_API_VERSION=
# DATABASE
DB_NAME=
DB_USER=
DB_PASSWORD=
DB_HOST=
DB_PORT=
DB_URL=
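Once the .env file is populated, the variables can be validated at startup. Below is a minimal sketch using only the standard library; `get_required_env` is an illustrative helper, not part of the project, which may instead rely on a package such as python-dotenv:

```python
import os

def get_required_env(name: str) -> str:
    """Fetch an environment variable and fail fast if it is missing or empty."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Example:
# openai_key = get_required_env("OPENAI_API_KEY")
```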
Database
This section provides an example database and tables with a simple, well-structured design. If you create the
tables and columns with the same structure in your remote database, the code will run without errors. If you
want to change the database structure, however, you will also need to refactor the code.
Note: Refer to the Database Structure section for the database schema and tables.
sqlite3 .db/Callytics.sqlite < src/db/sql/Schema.sql
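The same schema load can also be done from Python with the standard library's sqlite3 module. This is a sketch equivalent to the CLI command above; `init_database` is an illustrative helper name:

```python
import sqlite3
from pathlib import Path

def init_database(db_path: str, schema_path: str) -> None:
    """Create the SQLite database from a schema file."""
    schema_sql = Path(schema_path).read_text(encoding="utf-8")
    with sqlite3.connect(db_path) as conn:
        conn.executescript(schema_sql)

# init_database(".db/Callytics.sqlite", "src/db/sql/Schema.sql")
```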
Grafana
This section explains how to install Grafana in your local environment. Since Grafana is a third-party
open-source monitoring application, you must handle its installation yourself and connect your database. You can
also use Grafana Cloud instead of a local installation.
sudo apt update -y && sudo apt upgrade -y
sudo apt install -y apt-transport-https software-properties-common wget
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt update
sudo apt install -y grafana
sudo systemctl start grafana-server
sudo systemctl enable grafana-server
sudo systemctl daemon-reload
http://localhost:3000
SQLite Plugin
sudo grafana-cli plugins install frser-sqlite-datasource
sudo systemctl restart grafana-server
sudo systemctl daemon-reload
File Structure
.
├── automation
│   └── service
│       └── callytics.service
├── config
│   ├── config.yaml
│   ├── nemo
│   │   └── diar_infer_telephonic.yaml
│   └── prompt.yaml
├── .data
│   ├── example
│   │   └── LogisticsCallCenterConversation.mp3
│   └── input
├── .db
│   └── Callytics.sqlite
├── .docs
│   ├── documentation
│   │   ├── CONTRIBUTING.md
│   │   └── RESOURCES.md
│   └── img
│       ├── Callytics.drawio
│       ├── Callytics.gif
│       ├── CallyticsIcon.png
│       ├── Callytics.png
│       ├── Callytics.svg
│       └── database.png
├── .env
├── environment.yaml
├── .gitattributes
├── .github
│   └── CODEOWNERS
├── .gitignore
├── LICENSE
├── main.py
├── README.md
├── requirements.txt
└── src
    ├── audio
    │   ├── alignment.py
    │   ├── analysis.py
    │   ├── effect.py
    │   ├── error.py
    │   ├── io.py
    │   ├── metrics.py
    │   ├── preprocessing.py
    │   ├── processing.py
    │   └── utils.py
    ├── db
    │   ├── manager.py
    │   └── sql
    │       ├── AudioPropertiesInsert.sql
    │       ├── Schema.sql
    │       ├── TopicFetch.sql
    │       ├── TopicInsert.sql
    │       └── UtteranceInsert.sql
    ├── text
    │   ├── llm.py
    │   ├── model.py
    │   ├── prompt.py
    │   └── utils.py
    └── utils
        └── utils.py

19 directories, 43 files
Database Structure

