# ExpGCN: Review-aware Graph Convolution Network for explainable recommendation
This is the official PyTorch implementation of our paper:
T. Wei, T. W. S. Chow, J. Ma, and M. Zhao, "ExpGCN: Review-aware Graph Convolution Network for explainable recommendation," *Neural Networks*, 2022. [Paper Link](https://www.sciencedirect.com/science/article/pii/S0893608022004087)
## Requirements
The model implementation is compatible with the recommendation toolbox RecBole[^1] (GitHub: RecBole), and uses Numba for high-speed negative sampling.
- Python: 3.8+
- RecBole: 1.0.1
- Numba: 0.55.1+
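Negative sampling draws, for each user, items the user has not interacted with. The sketch below is a plain-Python illustration of uniform rejection sampling; the repository reportedly JIT-compiles a loop of this kind with Numba's `@njit` for speed, but the exact routine and its signature are assumptions, not the repo's code.

```python
import random

def sample_negatives(user_pos, n_items, n_neg, seed=0):
    """Uniformly sample `n_neg` items the user has NOT interacted with.

    Plain-Python sketch; the actual repo presumably accelerates a loop
    like this with Numba's @njit (this routine is an assumption).
    """
    rng = random.Random(seed)
    positives = set(user_pos)
    negatives = []
    while len(negatives) < n_neg:
        item = rng.randrange(n_items)
        if item not in positives:  # reject observed interactions
            negatives.append(item)
    return negatives
```

Rejection sampling is efficient when interactions are sparse, since almost every draw is a valid negative.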
## Data Formulation
The dataset is processed as follows:
### `<DATASET_NAME>.inter`

Includes all $\langle user, item, explanation \rangle$ triplet interaction data. Each row contains a $\langle user, item \rangle$ pair together with its associated explanation IDs, formatted as

```
<user_id> <item_id> <explanation_id_1>,<explanation_id_2>...
```
### `<DATASET_NAME>.item`

Includes the names of all items. Each row contains one item identified by `<item_id>`.
### `<DATASET_NAME>.user`

Includes the names of all users. Each row contains one user identified by `<user_id>`.
The above three files are placed in the folder `Data/<DATASET_NAME>`, and the corresponding configuration file is created as `Params/<DATASET_NAME>.yaml`. This repository provides the Amazon Movies & TV[^2] (AmazonMTV) dataset as an example.
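Since the implementation builds on RecBole, the YAML configuration follows RecBole's format. The fragment below is purely illustrative: the keys shown are standard RecBole options, but the values (and which options `Params/AmazonMTV.yaml` actually sets) are assumptions.

```yaml
# Illustrative RecBole-style config; keys are standard RecBole options,
# values are placeholder assumptions, not the shipped settings.
USER_ID_FIELD: user_id
ITEM_ID_FIELD: item_id
embedding_size: 64        # latent dimension
learning_rate: 0.001
train_batch_size: 2048
epochs: 300
```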
## Run

The script `run.py` runs the demo. To train and evaluate ExpGCN on a specific dataset, run

```
python run.py --dataset DATASET_NAME
```

For example, `python run.py --dataset AmazonMTV` uses the provided dataset.
## How to cite
```bibtex
@article{Wei2023,
  title = {ExpGCN: Review-aware Graph Convolution Network for explainable recommendation},
  journal = {Neural Networks},
  volume = {157},
  pages = {202-215},
  year = {2023},
  issn = {0893-6080},
  doi = {10.1016/j.neunet.2022.10.014},
  url = {https://www.sciencedirect.com/science/article/pii/S0893608022004087},
  author = {Tianjun Wei and Tommy W.S. Chow and Jianghong Ma and Mingbo Zhao},
  keywords = {Explainable recommendation, Recommender system, Graph Neural Network, Multi-task learning, Collaborative filtering},
  abstract = {Existing works in recommender system have widely explored extracting reviews as explanations beyond user–item interactions, and formulated the explanation generation as a ranking task to enhance item recommendation performance. To associate explanations with users and items, graph neural networks (GNN) are usually employed to learn node representations on the heterogeneous user–item–explanation interaction graph. However, modeling heterogeneous graph convolution poses limitations in both message passing styles and computational efficiency, resulting in sub-optimal recommendation performance. To address the limitations, we propose an Explanation-aware Graph Convolution Network (ExpGCN). In particular, the heterogeneous interaction graph is divided to subgraphs regard to the edge types in ExpGCN. By aggregating information from distinct subgraphs, ExpGCN is capable of generating node representations for explanation ranking task and item recommendation task respectively. Task-oriented graph convolution can not only reduce the complexity of heterogeneous node aggregation, but also alleviate the performance degeneration caused by the conflicts between task learning objectives, which has been neglected in current studies. Extensive experiments on four public datasets show that ExpGCN significantly outperforms state-of-the-art baselines with high efficiency, demonstrating the effectiveness of ExpGCN in explainable recommendations.}
}
```
## References