LEAMR
LEAMR (Linguistically Enriched AMR, pronounced lemur) Alignments is a data release of alignments between AMR and English text for better parsing and probing of many different linguistic phenomena. We also include our code for the LEAMR aligner. For more details, read our paper.
Austin Blodgett and Nathan Schneider. 2021. Probabilistic, Structure-Aware Algorithms for Improved Variety, Accuracy, and Coverage of AMR Alignments. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics.
For other useful resources for AMR research, also take a look at AMR-utils and the AMR Bibliography.
Install
pip install -r requirements.txt
git clone https://github.com//ablodge/amr-utils
pip install ./amr-utils
Data
We release alignment data for AMR Release 3.0 and Little Prince comprising ~60,000 sentences, as well as 350 sentences with gold alignments in leamr_test.txt and leamr_dev.txt.
We release 4 layers of alignments: subgraph, duplicate subgraph, relation, and reentrancy alignments.
For AMR Release 3.0 and Little Prince, as well as our gold test and dev data we release:
<corpus>.subgraph_alignments.json: Each subgraph alignment maps a DAG-shaped subgraph to a single span. This layer also includes duplicate subgraph alignments, marked with the alignment type "dupl-subgraph". Some AMRs duplicate part of the graph to represent ellipsis and other phenomena where part of the meaning is unpronounced; duplicate subgraph alignments cover these cases.

<corpus>.relation_alignments.json: Each relation alignment maps a span to a collection of external edges, where each edge connects two subgraphs aligned in the previous layer. These alignments include argument structures (gave => :ARG0, :ARG1, :ARG2) and single relation alignments (when => :time).

<corpus>.reentrancy_alignments.json: Each reentrancy alignment maps a reentrant edge to the span which "triggers" that reentrancy, and is classified with a reentrancy type to account for phenomena like coreference, control, and coordination.
We also release <corpus>.spans.json, which specifies the spans for each sentence, grouping together tokens which are named entities or multiword expressions.
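As a concrete picture of span grouping, the sketch below parses a hand-made spans entry with the standard library. The field layout, AMR ID, and tokens here are illustrative assumptions, not the exact release schema:

```python
import json

# Hypothetical spans entry (illustrative only; the real release schema may
# differ): each inner list groups token indices into one span, so that the
# named entity "New York" becomes a single unit.
spans_json = '{"example_amr_1": [[0], [1], [2], [3, 4], [5]]}'
spans = json.loads(spans_json)

tokens = ["The", "mayor", "of", "New", "York", "spoke"]
span_texts = [" ".join(tokens[i] for i in span)
              for span in spans["example_amr_1"]]
print(span_texts)  # ['The', 'mayor', 'of', 'New York', 'spoke']
```

Grouping tokens into spans first means the alignment layers can treat "New York" as one alignable unit rather than two separate tokens.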
JSON Format
Alignments are released as JSON files.
To read alignments from a JSON file do:
# AMR_Reader is provided by the amr-utils package installed above
from amr_utils.amr_readers import AMR_Reader

reader = AMR_Reader()
alignments = reader.load_alignments_from_json(alignments_file)
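If you just want to inspect the raw files without amr-utils, they are ordinary JSON. The entry below is a fabricated illustration of a subgraph alignment; the field names are assumptions based on the layer descriptions above, not the exact release schema:

```python
import json

# Fabricated subgraph-alignment entry (field names are illustrative):
# a span of token indices aligned to a set of AMR graph nodes.
raw = '''
{
  "example_amr_1": [
    {"type": "subgraph", "tokens": [3, 4], "nodes": ["n1", "n2"]}
  ]
}
'''
alignments = json.loads(raw)
for amr_id, aligns in alignments.items():
    for a in aligns:
        print(amr_id, a["type"], a["tokens"], "->", a["nodes"])
```

For the released data itself, prefer loading through AMR_Reader as shown above, since the files are anonymized until you run the unpacking steps described under "Get AMR Data".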
Get Alignments
Anonymized alignments are stored in the folder data-release/alignments. To interpret them, you will need the associated AMR data.
Get AMR Data
You will first need to obtain AMR Release 3.0 from LDC: https://catalog.ldc.upenn.edu/LDC2020T02. Afterwards you can run the following code to unpack the remainder of the data. Make sure to specify <LDC parent dir> as the parent directory of your AMR Release 3.0 data.
wget https://amr.isi.edu/download/amr-bank-struct-v3.0.txt -O data-release/amrs/little_prince.txt
python build_data.py <LDC parent dir>
python unanonymize_alignments.py
LEAMR Aligner
You will need to download the spaCy and Stanza models for English:
python3 -m spacy download en_core_web_sm
python3 -c "import stanza; stanza.download('en')"
Run Pre-trained Aligner
First, make sure the param files have downloaded completely:
wget https://github.com/ablodge/leamr/raw/master/ldc%2Blittle_prince.subgraph_params.pkl -O ldc+little_prince.subgraph_params.pkl
wget https://github.com/ablodge/leamr/raw/master/ldc%2Blittle_prince.relation_params.pkl -O ldc+little_prince.relation_params.pkl
wget https://github.com/ablodge/leamr/raw/master/ldc%2Blittle_prince.reentrancy_params.pkl -O ldc+little_prince.reentrancy_params.pkl
For a file of unaligned AMRs for English <unaligned amr file>, you can create alignments by running the following code. The script nlp_data.py does necessary preprocessing and may take several hours to run on a large dataset.
python nlp_data.py <unaligned amr file>
python align_with_pretrained_model.py -t <unaligned amr file> --subgraph-model ldc+little_prince.subgraph_params.pkl --relation-model ldc+little_prince.relation_params.pkl --reentrancy-model ldc+little_prince.reentrancy_params.pkl
Train Aligner
You can set <train file> to 'data-release/amrs/ldc+little_prince' or some other AMR file name. The script nlp_data.py does necessary preprocessing and may take several hours to run on a large dataset.
python nlp_data.py <train file>.txt
python train_subgraph_aligner.py -T <train file>.txt --save-model <model name>.subgraph_params.pkl
python train_relation_aligner.py -T <train file>.txt --save-model <model name>.relation_params.pkl
python train_reentrancy_aligner.py -T <train file>.txt --save-model <model name>.reentrancy_params.pkl
Bibtex
@inproceedings{blodgett-schneider-2021-probabilistic,
title = "Probabilistic, Structure-Aware Algorithms for Improved Variety, Accuracy, and Coverage of {AMR} Alignments",
author = "Blodgett, Austin and
Schneider, Nathan",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.257",
doi = "10.18653/v1/2021.acl-long.257",
pages = "3310--3321"
}