MMGCN: Multi-modal Graph Convolution Network for Personalized Recommendation of Micro-video
This is our Pytorch implementation for the paper:
Yinwei Wei, Xiang Wang, Liqiang Nie, Xiangnan He, Richang Hong, and Tat-Seng Chua (2019). MMGCN: Multi-modal Graph Convolution Network for Personalized Recommendation of Micro-video. In ACM MM '19, Nice, France, Oct. 21-25, 2019.
Author: Dr. Yinwei Wei (weiyinwei at hotmail.com)
Introduction
Multi-modal Graph Convolution Network (MMGCN) is a multi-modal recommendation framework based on graph convolutional networks. It explicitly models modal-specific user preferences to enhance micro-video recommendation. We have updated the code to use the full-ranking strategy for validation and testing.
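Conceptually, MMGCN builds one user-item bipartite graph per modality (visual, acoustic, textual) and propagates representations along observed interactions. The toy sketch below illustrates a single mean-aggregation step over such a graph with numpy; the edge list and feature sizes are made up for illustration, and the actual implementation uses torch-geometric message passing.

```python
import numpy as np

# Toy bipartite graph: 3 users, 4 items; edges are observed interactions.
# (Illustrative data only; not the repository's actual graph layers.)
edges = np.array([[0, 0], [0, 2], [1, 1], [2, 2], [2, 3]])  # (user, item) pairs
item_feat = np.random.rand(4, 8)  # per-modality item features, dim 8

# One mean-aggregation step: each user averages the features of the items
# she/he interacted with in this modality.
user_emb = np.zeros((3, 8))
for u in range(3):
    neigh = edges[edges[:, 0] == u, 1]        # items this user interacted with
    user_emb[u] = item_feat[neigh].mean(axis=0)

print(user_emb.shape)  # (3, 8)
```

In the full model this step is repeated per layer and per modality before the modal representations are combined.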
Citation
If you want to use our codes and datasets in your research, please cite:
@inproceedings{MMGCN,
title = {MMGCN: Multi-modal graph convolution network for personalized recommendation of micro-video},
author = {Wei, Yinwei and
Wang, Xiang and
Nie, Liqiang and
He, Xiangnan and
Hong, Richang and
Chua, Tat-Seng},
booktitle = {Proceedings of the 27th ACM International Conference on Multimedia},
pages = {1437--1445},
year = {2019}
}
Environment Requirement
The code has been tested running under Python 3.5.2. The required packages are as follows:
- Pytorch == 1.1.0
- torch-cluster == 1.4.2
- torch-geometric == 1.2.1
- torch-scatter == 1.2.0
- torch-sparse == 0.4.0
- numpy == 1.16.0
Example to Run the Codes
The usage of the commands is documented in the code.
- Kwai dataset

  `python main.py --model_name='MMGCN' --l_r=0.0005 --weight_decay=0.1 --batch_size=1024 --dim_latent=64 --num_workers=30 --aggr_mode='mean' --num_layer=2 --concat=False`

- Tiktok dataset

  `python main.py --model_name='MMGCN' --l_r=0.0005 --weight_decay=0.1 --batch_size=1024 --dim_latent=64 --num_workers=30 --aggr_mode='mean' --num_layer=2 --concat=False`

- Movielens dataset

  `python main.py --model_name='MMGCN' --l_r=0.0001 --weight_decay=0.0001 --batch_size=1024 --dim_latent=64 --num_workers=30 --aggr_mode='mean' --num_layer=2 --concat=False`
Some important arguments:
- `model_name`: It specifies the type of model. Here we provide five options:
  - `MMGCN` (by default), proposed in MMGCN: Multi-modal Graph Convolution Network for Personalized Recommendation of Micro-video, ACM MM 2019. Usage: `--model_name='MMGCN'`
  - `VBPR`, proposed in VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback, AAAI 2016. Usage: `--model_name='VBPR'`
  - `ACF`, proposed in Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention, SIGIR 2017. Usage: `--model_name='ACF'`
  - `GraphSAGE`, proposed in Inductive Representation Learning on Large Graphs, NIPS 2017. Usage: `--model_name='GraphSAGE'`
  - `NGCF`, proposed in Neural Graph Collaborative Filtering, SIGIR 2019. Usage: `--model_name='NGCF'`
- `aggr_mode`: It specifies the type of aggregation layer. Here we provide three options:
  - `mean` (by default) implements mean aggregation in the aggregation layer. Usage: `--aggr_mode='mean'`
  - `max` implements max aggregation in the aggregation layer. Usage: `--aggr_mode='max'`
  - `add` implements sum aggregation in the aggregation layer. Usage: `--aggr_mode='add'`
- `concat`: It indicates the type of combination layer. Here we provide two options:
  - `concat` (by default) implements concatenation combination in the combination layer. Usage: `--concat='True'`
  - `ele` implements element-wise combination in the combination layer. Usage: `--concat='False'`
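The effect of the `aggr_mode` and `concat` choices can be sketched in a few lines of numpy on a toy node. The vectors below are invented for illustration; they only show how each option reduces the neighbor features and merges them with the node's own representation.

```python
import numpy as np

# Toy neighbor features for one node (4 neighbors, dim 3); illustrative only.
neigh = np.array([[1., 2., 3.],
                  [3., 2., 1.],
                  [2., 2., 2.],
                  [0., 4., 2.]])
self_feat = np.array([1., 1., 1.])

# The three aggr_mode choices reduce the neighbor features to one vector:
agg = {
    'mean': neigh.mean(axis=0),  # --aggr_mode='mean'
    'max':  neigh.max(axis=0),   # --aggr_mode='max'
    'add':  neigh.sum(axis=0),   # --aggr_mode='add'
}

# The combination layer then merges the aggregated vector with the node's
# own representation: concatenation (--concat='True') doubles the dimension,
# while element-wise combination (--concat='False') keeps it fixed.
combined_concat = np.concatenate([self_feat, agg['mean']])  # dim 6
combined_ele = self_feat + agg['mean']                      # dim 3
```

Note that with `--concat='True'` the output dimension grows with each layer, which affects the size of downstream weight matrices.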
Dataset
We provide three processed datasets: Kwai, Tiktok, and Movielens.
- You can find the full versions of the recommendation datasets via Kwai, Tiktok, and Movielens. Due to the copyright of the datasets, we cannot release them directly. To facilitate this line of research, we provide some toy datasets via [BaiduPan] (code: zsye) or [GoogleDrive]. If you need the full datasets, please contact the owners of the datasets.
| | #Interactions | #Users | #Items | Visual | Acoustic | Textual |
|:-|:-|:-|:-|:-|:-|:-|
| Kwai | 1,664,305 | 22,611 | 329,510 | 2,048 | - | 100 |
| Tiktok | 726,065 | 36,656 | 76,085 | 128 | 128 | 128 |
| Movielens | 1,239,508 | 55,485 | 5,986 | 2,048 | 128 | 100 |
- `train.npy`: Train file. Each line is a user with her/his positive interactions with items: (userID and micro-video ID)
- `val.npy`: Validation file. Each line is a user with several positive interactions with items: (userID and micro-video ID)
- `test.npy`: Test file. Each line is a user with several positive interactions with items: (userID and micro-video ID)
Copyright (C) <year> Shandong University
This program is licensed under the GNU General Public License 3.0 (https://www.gnu.org/licenses/gpl-3.0.html). Any derivative work obtained under this license must be licensed under the GNU General Public License as published by the Free Software Foundation, either Version 3 of the License, or (at your option) any later version, if this derivative work is distributed to a third party.
The copyright for the program is owned by Shandong University. For commercial projects that require the ability to distribute the code of this program as part of a program that cannot be distributed under the GNU General Public License, please contact weiyinwei@hotmail.com to purchase a commercial license.