Chatbot

Conversational AI chatbot with a focus on keeping a consistent set of persona facts throughout the conversation. Models trained include:

  • Sequence to Sequence
  • Transformer
  • Multiple Encoders
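The Multiple Encoders idea can be pictured as separate encoders for the persona facts and the incoming message, whose summaries are fused before decoding a reply. A toy numpy sketch of that fusion step (the mean-pooling "encoder" and all names are illustrative, not the project's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(token_ids, embed):
    """Toy mean-pooling 'encoder': embeds tokens and averages them.
    Stands in for the recurrent/attention encoders used in the project."""
    return embed[token_ids].mean(axis=0)

vocab_size, dim = 1000, 64
embed = rng.standard_normal((vocab_size, dim))

persona_ids = np.array([1, 2, 3])   # e.g. "i like to ski"
message_ids = np.array([7, 8, 9])   # the user's last utterance

# Encode persona facts and the message separately, then fuse both
# summaries into one context vector for the decoder.
persona_vec = encode(persona_ids, embed)
message_vec = encode(message_ids, embed)
fused = np.concatenate([persona_vec, message_vec])  # shape (128,)
```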

This project consists of a Flask website and a command line interface.
Watch the demo: https://www.youtube.com/watch?v=1Prv4J6LOuM&feature=youtu.be

Installation

To install the Command Line Interface, first install Python 3, then run:

git clone https://github.com/psyfb2/Chatbot.git
cd Chatbot/chatbot
pip install -r requirements.txt

The GloVe embedding file is not included in this repository because of its large size. Download glove.6B.zip and extract glove.6B.300d.txt into the chatbot/data folder.
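Loading the extracted file into an embedding matrix typically looks like the following sketch (a hypothetical helper; the project's own loading code may differ). Each line of glove.6B.300d.txt is a word followed by 300 floats:

```python
import numpy as np

def load_glove(path, vocab, dim=300):
    """Build an embedding matrix from a GloVe text file for the given
    vocab list. Out-of-vocabulary words keep a zero row.
    (Illustrative helper, not the project's actual loader.)"""
    matrix = np.zeros((len(vocab), dim), dtype=np.float32)
    index = {word: i for i, word in enumerate(vocab)}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            if word in index:
                matrix[index[word]] = np.asarray(values, dtype=np.float32)
    return matrix
```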

Command Line Interface

The CLI is responsible for training, evaluating and interacting with the models.
Model choices include:

  • seq2seq
  • deep_seq2seq
  • multiple_encoders
  • deep_multiple_encoders
  • transformer

One of these names should be passed to the train, eval, or talk argument.

| Argument Name | Description | Default Value |
|:-----------------------:|:----------------------------------------------------------------:|:-----------------:|
| train | Name of the model to train | None |
| batch_size | Training batch size | 64 |
| epochs | Max number of training epochs | 100 |
| early_stopping_patience | Number of epochs to run without best validation loss decreasing | 7 |
| segment_embedding | Use segment embedding? | True |
| perform_pretraining | Pretrain models on Movie and Daily Dialog datasets? | False |
| verbose | Display loss for each batch? | 0 |
| min_epochs | Number of epochs to run regardless of early stopping | 30 |
| glove_filename | Name of the GloVe file to use in the data folder | glove.6B.300d.txt |
| eval | Name of the model to evaluate using Perplexity and F1 | None |
| talk | Name of the model to interact with | None |
| beam_width | Beam width to use in beam search | 3 |
| beam_search | Use beam search? | True |
| plot_attention | Plot attention weights; requires beam_search to be False | False |
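An argument set like this is usually wired up with argparse. A sketch of how the CLI flags above could be declared (the real main.py may define them differently; only a few flags are shown):

```python
import argparse

# Illustrative argparse setup mirroring the CLI arguments above;
# the actual chatbot/models/main.py may differ.
parser = argparse.ArgumentParser(
    description="Train, evaluate or talk to a chatbot model")
parser.add_argument("--train", help="Name of the model to train")
parser.add_argument("--eval", help="Name of the model to evaluate")
parser.add_argument("--talk", help="Name of the model to interact with")
parser.add_argument("--batch_size", type=int, default=64)
parser.add_argument("--epochs", type=int, default=100)
parser.add_argument("--early_stopping_patience", type=int, default=7)
parser.add_argument("--min_epochs", type=int, default=30)
parser.add_argument("--glove_filename", default="glove.6B.300d.txt")
parser.add_argument("--beam_width", type=int, default=3)

# Parsing an explicit argv list, as `python main.py --train seq2seq` would.
args = parser.parse_args(["--train", "seq2seq"])
```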

The CLI entry point is chatbot/models/main.py.

Example usage to train Seq2Seq model:
python main.py --train seq2seq
Example usage to evaluate Multiple Encoders model:
python main.py --eval multiple_encoders
Example usage to interact with Transformer model:
python main.py --talk transformer
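The beam_search and beam_width options in talk mode refer to beam search decoding: keeping the beam_width highest-scoring partial replies at each step instead of greedily taking the single best token. A minimal sketch of the idea over a toy next-token distribution (not the project's decoder, which works on model logits):

```python
import math

def beam_search(step_fn, start, beam_width=3, max_len=5, eos=None):
    """Minimal beam search. step_fn(seq) -> {token: prob} gives the
    next-token distribution; we keep the beam_width partial sequences
    with the highest total log-probability at every step."""
    beams = [([start], 0.0)]
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if eos is not None and seq[-1] == eos:
                candidates.append((seq, score))  # finished beam
                continue
            for token, p in step_fn(seq).items():
                candidates.append((seq + [token], score + math.log(p)))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]  # best-scoring sequence
```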

Note that trained models are not included in this repository because of their large size.
Trained models can be downloaded here; place them in the chatbot/saved_models folder:
Trained Models

Evaluation results can be found at chatbot/models/results.txt
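Of the two metrics used in eval mode, perplexity is the standard exponential of the average negative log-likelihood the model assigns to each reference token. A short sketch of that definition (per-token probabilities are assumed given; the project computes them from model outputs):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-likelihood of the
    probabilities the model assigned to the reference tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)
```

For example, a model that assigns probability 0.25 to every reference token has perplexity 4: it is as uncertain as a uniform choice among four tokens.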

Website

The models are deployed using a Flask backend.
The live version can be found here (currently down but will be up again soon): Live Version
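A Flask backend for a chatbot typically exposes a JSON endpoint that runs model inference on the user's message. A minimal sketch (the route name, payload shape, and placeholder reply function are assumptions, not this project's actual API):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_reply(message):
    # Placeholder for model inference; the real backend would load one
    # of the trained models from chatbot/saved_models and decode a reply.
    return message.upper()

@app.route("/chat", methods=["POST"])
def chat():
    """Accept {"message": "..."} and return {"reply": "..."}."""
    message = request.get_json().get("message", "")
    return jsonify({"reply": generate_reply(message)})
```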
