RelationPrediction
ACL 2019: Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs
Source code for our ACL 2019 paper: Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs. A blog post about this publication is also available.
Requirements
Please download Miniconda and create an environment using the following command:
$ conda env create -f pytorch35.yml
Activate the environment before executing the program as follows:
$ source activate pytorch35
Dataset
We used five different datasets for evaluating our model. All the datasets and their folder names are given below.
- Freebase: FB15k-237
- Wordnet: WN18RR
- Nell: NELL-995
- Kinship: kinship
- UMLS: umls
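Each dataset folder is assumed here to contain tab-separated triple files (e.g. train.txt with one head, relation, tail per line) — a common knowledge-graph convention, not verified against this repo. A minimal loader sketch:

```python
import os
import tempfile

def load_triples(path):
    """Parse a tab-separated triples file into (head, relation, tail) tuples."""
    triples = []
    with open(path) as f:
        for line in f:
            parts = line.strip().split("\t")
            if len(parts) == 3:
                triples.append(tuple(parts))
    return triples

# Illustrative data mimicking the assumed file format.
sample = "plant\t_hypernym\torganism\ndog\t_hypernym\tanimal\n"
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(sample)
    tmp = f.name

triples = load_triples(tmp)
os.remove(tmp)
print(triples)  # [('plant', '_hypernym', 'organism'), ('dog', '_hypernym', 'animal')]
```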
Training
Parameters:
--data: Specify the folder name of the dataset.
--epochs_gat: Number of epochs for GAT training.
--epochs_conv: Number of epochs for convolution training.
--lr: Initial learning rate.
--weight_decay_gat: L2 regularization for GAT.
--weight_decay_conv: L2 regularization for the convolution model.
--get_2hop: Save a pickled object of 2-hop neighbors.
--use_2hop: Use 2-hop neighbors for training.
--partial_2hop: Use only one 2-hop neighbor per node for training.
--output_folder: Path of output folder for saving models.
--batch_size_gat: Batch size for the GAT model.
--valid_invalid_ratio_gat: Ratio of valid to invalid triples for GAT training.
--drop_gat: Dropout probability for the attention layer.
--alpha: LeakyReLU alpha for the attention layer.
--nhead_GAT: Number of heads for multi-head attention.
--margin: Margin used in hinge loss.
--batch_size_conv: Batch size for the convolution model.
--alpha_conv: LeakyReLU alpha for the convolution layer.
--valid_invalid_ratio_conv: Ratio of valid to invalid triples for convolution training.
--out_channels: Number of output channels in the convolution layer.
--drop_conv: Dropout probability for the convolution layer.
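The GAT-stage flags above (--drop_gat, --alpha, --nhead_GAT) relate to the paper's attention over triples: each edge incident to an entity gets a score passed through a LeakyReLU activation, and the scores are normalized with a softmax over that entity's neighborhood. A hedged pure-Python sketch of just that normalization step (toy numbers, not the repo's code):

```python
import math

def leaky_relu(x, alpha=0.2):
    # The --alpha flag above sets this negative slope.
    return x if x > 0 else alpha * x

def attention_weights(scores, alpha=0.2):
    """Softmax over LeakyReLU-activated edge scores for one node's neighborhood."""
    activated = [leaky_relu(s, alpha) for s in scores]
    m = max(activated)  # subtract the max for numerical stability
    exps = [math.exp(a - m) for a in activated]
    z = sum(exps)
    return [e / z for e in exps]

# Toy raw scores for three triples incident to one entity.
weights = attention_weights([2.0, -1.0, 0.5])
assert abs(sum(weights) - 1.0) < 1e-9  # attention weights normalize to 1
```

In the full model these weights multiply the triple embeddings before they are summed into the updated entity embedding; multi-head attention (--nhead_GAT) repeats this with independent parameter sets and concatenates the results.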
Reproducing results
To reproduce the results published in the paper:
When running for the first time, run the preparation script:
$ sh prepare.sh
Wordnet
$ python3 main.py --get_2hop True
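The --get_2hop flag precomputes each node's 2-hop neighborhood and pickles it for reuse. Conceptually, over a directed adjacency list (a hedged sketch, not the repo's implementation):

```python
from collections import defaultdict

def two_hop_neighbors(edges):
    """Map each node to the set of nodes reachable in exactly two hops."""
    adj = defaultdict(set)
    for head, tail in edges:
        adj[head].add(tail)
    two_hop = defaultdict(set)
    for src in adj:
        for mid in adj[src]:
            # .get avoids inserting new keys while iterating the adjacency map
            for dst in adj.get(mid, ()):
                if dst != src:
                    two_hop[src].add(dst)
    return dict(two_hop)

# Toy graph: a -> b -> c, so c is a 2-hop neighbor of a.
result = two_hop_neighbors([("a", "b"), ("b", "c")])
print(result)  # {'a': {'c'}}
```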
Freebase
$ python3 main.py --data ./data/FB15k-237/ --epochs_gat 3000 --epochs_conv 200 --weight_decay_gat 0.00001 --get_2hop True --partial_2hop True --batch_size_gat 272115 --margin 1 --out_channels 50 --drop_conv 0.3 --weight_decay_conv 0.000001 --output_folder ./checkpoints/fb/out/
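The --margin flag above feeds the hinge (margin-based ranking) loss used during training: a valid triple should score at least `margin` better than a corrupted one. A minimal sketch with illustrative distance values (function name is hypothetical, not the repo's code):

```python
def hinge_loss(d_valid, d_invalid, margin=1.0):
    """Penalize the model when a valid triple's distance is not at least
    `margin` smaller than an invalid (corrupted) triple's distance."""
    return max(0.0, d_valid - d_invalid + margin)

assert hinge_loss(0.5, 3.0, margin=1.0) == 0.0          # well separated: no loss
assert abs(hinge_loss(2.0, 2.2, margin=1.0) - 0.8) < 1e-9  # margin violated: positive loss
```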
Citation
Please cite the following paper if you use this code in your work.
@InProceedings{KBGAT2019,
author = "Nathani, Deepak and Chauhan, Jatin and Sharma, Charu and Kaul, Manohar",
title = "Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
year = "2019",
publisher = "Association for Computational Linguistics",
location = "Florence, Italy",
}
For any clarification, comments, or suggestions, please create an issue or contact deepakn1019@gmail.com.