# AlignScore

This is the repository for AlignScore, a metric for automatic factual consistency evaluation of text pairs, introduced in:

> **AlignScore: Evaluating Factual Consistency with a Unified Alignment Function**
> Yuheng Zha, Yichi Yang, Ruichen Li and Zhiting Hu
> ACL 2023
Factual consistency evaluation checks whether all the information in text b is contained in text a (i.e., b does not contradict a). For example, the following pair is factually inconsistent:
- a: Children smiling and waving at camera.
- b: The kids are frowning.
And the following pair is factually consistent:
- a: The NBA season of 1975 -- 76 was the 30th season of the National Basketball Association.
- b: The 1975 -- 76 season of the National Basketball Association was the 30th season of the NBA.
Factual consistency evaluation applies to many tasks, such as summarization, paraphrasing, and dialog. For example, large language models often hallucinate when summarizing documents, and we want to know whether the generated text is factually consistent with its original context.
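To handle long documents, AlignScore's inference (as described in the paper) splits the context into chunks and the claim into sentences, scores each sentence against every chunk, takes the maximum over chunks, and averages over sentences. Below is a minimal, runnable sketch of that aggregation scheme only; `align_prob` is a toy word-overlap stub standing in for the trained RoBERTa alignment model, not the real metric:

```python
# Sketch of AlignScore-style score aggregation.
# NOTE: align_prob is a hypothetical stand-in for the trained alignment
# model -- it just measures word overlap so the aggregation is runnable.

def align_prob(chunk: str, claim_sentence: str) -> float:
    """Toy alignment function: fraction of claim words found in the chunk."""
    chunk_words = set(chunk.lower().split())
    claim_words = set(claim_sentence.lower().split())
    return len(chunk_words & claim_words) / len(claim_words) if claim_words else 0.0

def aggregate_score(context_chunks: list[str], claim_sentences: list[str]) -> float:
    """Max over context chunks, then mean over claim sentences."""
    per_sentence = [
        max(align_prob(chunk, sent) for chunk in context_chunks)
        for sent in claim_sentences
    ]
    return sum(per_sentence) / len(per_sentence)

chunks = ["children smiling and waving at camera"]
score = aggregate_score(chunks, ["the children are smiling"])
```

The max-over-chunks step rewards a claim sentence that is supported anywhere in the context, while the mean-over-sentences step penalizes claims that are only partially supported.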
## Leaderboards
We introduce two leaderboards that compare AlignScore with similar-sized metrics and LLM-based metrics, respectively.
### Leaderboard --- compare with similar-sized metrics
We list the performance of AlignScore and other metrics on the SummaC benchmark (6 datasets), the TRUE benchmark (11 datasets), and a set of other popular factual consistency datasets (7 datasets, listed below).
| Rank | Metrics | SummaC* | TRUE** | Other Datasets*** | Average**** | Paper | Code |
| ---- | :--------------- | :-----: | :----: | :---------------: | :---------: | :---: | :--: |
| 1 | AlignScore-large | 88.6 | 83.8 | 49.3 | 73.9 | :page_facing_up:(Zha et al. 2023) | :octocat: |
| 2 | AlignScore-base | 87.4 | 82.5 | 44.9 | 71.6 | :page_facing_up:(Zha et al. 2023) | :octocat: |
| 3 | QAFactEval | 83.8 | 79.4 | 42.4 | 68.5 | :page_facing_up:(Fabbri et al. 2022) | :octocat: |
| 4 | UniEval | 84.6 | 78.0 | 41.5 | 68.0 | :page_facing_up:(Zhong et al. 2022) | :octocat: |
| 5 | SummaC-CONV | 81.0 | 78.7 | 34.2 | 64.6 | :page_facing_up:(Laban et al. 2022) | :octocat: |
| 6 | BARTScore | 80.9 | 73.4 | 34.8 | 63.0 | :page_facing_up:(Yuan et al. 2022) | :octocat: |
| 7 | CTC | 81.2 | 72.4 | 35.3 | 63.0 | :page_facing_up:(Deng et al. 2022) | :octocat: |
| 8 | SummaC-ZS | 79.0 | 78.2 | 30.4 | 62.5 | :page_facing_up:(Laban et al. 2022) | :octocat: |
| 9 | ROUGE-2 | 78.1 | 72.4 | 27.9 | 59.5 | :page_facing_up:(Lin 2004) | :octocat: |
| 10 | ROUGE-1 | 77.4 | 72.0 | 28.6 | 59.3 | :page_facing_up:(Lin 2004) | :octocat: |
| 11 | ROUGE-L | 77.3 | 71.8 | 28.3 | 59.1 | :page_facing_up:(Lin 2004) | :octocat: |
| 12 | QuestEval | 72.5 | 71.4 | 25.0 | 56.3 | :page_facing_up:(Scialom et al. 2021) | :octocat: |
| 13 | BLEU | 76.3 | 67.3 | 24.6 | 56.1 | :page_facing_up:(Papineni et al. 2002) | :octocat: |
| 14 | DAE | 66.8 | 65.7 | 35.1 | 55.8 | :page_facing_up:(Goyal and Durrett 2020) | :octocat: |
| 15 | BLEURT | 69.2 | 71.9 | 24.9 | 55.4 | :page_facing_up:(Sellam et al. 2020) | :octocat: |
| 16 | BERTScore | 72.1 | 68.6 | 21.9 | 54.2 | :page_facing_up:(Zhang et al. 2020) | :octocat: |
| 17 | SimCSE | 67.4 | 70.3 | 23.8 | 53.8 | :page_facing_up:(Gao et al. 2021) | :octocat: |
| 18 | FactCC | 68.8 | 62.7 | 21.2 | 50.9 | :page_facing_up:(Kryscinski et al. 2020) | :octocat: |
| 19 | BLANC | 65.1 | 64.0 | 14.4 | 47.8 | :page_facing_up:(Vasilyev et al. 2020) | :octocat: |
| 20 | NER-Overlap | 60.4 | 59.3 | 18.9 | 46.2 | :page_facing_up:(Laban et al. 2022) | :octocat: |
| 21 | MNLI | 47.9 | 60.4 | 3.1 | 37.2 | :page_facing_up:(Williams et al. 2018) | :octocat: |
| 22 | FEQA | 48.3 | 52.2 | -1.9 | 32.9 | :page_facing_up:(Durmus et al. 2020) | :octocat: |
* SummaC Benchmark: [Paper] | [Github]. We report AUC-ROC on the SummaC benchmark.
** TRUE Benchmark: [Paper] | [Github]. We report AUC-ROC on the TRUE benchmark.
*** Besides the SummaC and TRUE benchmarks, we also include other popular factual consistency evaluation datasets: XSumFaith, SummEval, QAGS-XSum, QAGS-CNNDM, FRANK-XSum, FRANK-CNNDM and SamSum. Following common practice, we compute the Spearman correlation coefficient between the human-annotated scores and the metric-predicted scores.
**** To rank these metrics, we simply average their performance on SummaC, TRUE and the other datasets.
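The Average column is the unweighted arithmetic mean of the three preceding columns, which can be checked directly against the table; for example, for AlignScore-large:

```python
# Reproduce the Average column for AlignScore-large from the table above:
# SummaC AUC-ROC, TRUE AUC-ROC, and the other-datasets Spearman score.
summac, true_bench, other = 88.6, 83.8, 49.3
average = round((summac + true_bench + other) / 3, 1)
print(average)  # 73.9, matching the table
```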
### Leaderboard --- compare with LLM-based metrics
We also compare AlignScore with metrics based on large language models. The ranking is based on the average Spearman correlation coefficient on the SummEval, QAGS-XSum and QAGS-CNNDM datasets.*
| Rank | Metrics | Base Model | SummEval | QAGS-XSUM | QAGS-CNNDM | Average | Paper | Code |
| :--- | :-------------------- | :------------------------ | :------: | :-------: | :--------: | :-----: | :---: | :--: |
| 1 | AlignScore-large | RoBERTa-l (355M) | 46.6 | 57.2 | 73.9 | 59.3 | :page_facing_up:(Zha et al. 2023) | :octocat: |
| 2 | G-EVAL-4 | GPT4 | 50.7 | 53.7 | 68.5 | 57.6 | :page_facing_up:(Liu et al. 2023) | :octocat: |
| 3 | AlignScore-base | RoBERTa-b (125M) | 43.4 | 51.9 | 69.0 | 54.8 | :page_facing_up:(Zha et al. 2023) | :octocat: |
| 4 | FActScore (modified)** | GPT3.5-d03 + GPT3.5-turbo | 52.6 | 51.2 | 57.6 | 53.8 | :page_facing_up:(Min et al. 2023) | :octocat:* |
| 5 | ChatGPT (Chen et al. 2023) | GPT3.5-turbo | 42.7 | 53.3 | 52.7 | 49.6 | :page_facing_up:(Yi Chen et al. 2023) | :octocat: |
| 6 | GPTScore | GPT3.5-d03 | 45.9 | 22.7 | 64.4 | 44.3 | :page_facing_up:(Fu et al. 2023) | :octocat: |
| 7 | GPTScore | GPT3-d01 | 46.1 | 22.3 | 63.9 | 44.1 | :page_facing_up:(Fu et al. 2023) | :octocat: |