WEKO3


Item

  1. 02 Information Science
  2. 01 Journal Articles

Bidirectional Transformer Reranker for Grammatical Error Correction

http://hdl.handle.net/10061/0002001059
e2d39c55-9394-4787-9120-93db3bea0a99
Item Type: Journal Article (1)
Publication Date: 2025-07-17
Title: Bidirectional Transformer Reranker for Grammatical Error Correction
Language: eng
Keywords (Subject Scheme: Other): Grammatical Error Correction; Seq2seq; Reranking
Resource Type: journal article
Access Rights: open access
Authors:
  • Zhang, Ying
  • Kamigaito, Hidetaka (ja: 上垣外, 英剛; ja-Kana: カミガイト, ヒデタカ)
  • Okumura, Manabu
Abstract:
Pre-trained sequence-to-sequence (seq2seq) models have achieved state-of-the-art results in grammatical error correction tasks. However, these models are plagued by prediction bias owing to their unidirectional decoding. Thus, this study proposed a bidirectional transformer reranker (BTR) that re-estimates the probability of each candidate sentence generated by the pre-trained seq2seq model. The BTR preserves the seq2seq-style transformer architecture but utilizes a BERT-style self-attention mechanism in the decoder to compute the probability of each target token using masked language modeling, capturing bidirectional representations from the target context. To guide the reranking process, the BTR adopted negative sampling in the objective function to minimize the unlikelihood. During inference, the BTR yielded the final results after comparing the reranked top-1 results with the original ones using an acceptance threshold λ. Experimental results showed that, when reranking candidates from a pre-trained seq2seq model, T5-base, the BTR on top of T5-base yielded F0.5 scores of 65.47 and 71.27 on the CoNLL-14 and Building Educational Applications 2019 (BEA) test sets, respectively, and a GLEU score of 59.52 on the JFLEG corpus, with improvements of 0.36, 0.76, and 0.48 points over the original T5-base. Furthermore, when reranking candidates from T5-large, the BTR on top of T5-base improved the original T5-large by 0.26 points on the BEA test set.
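The inference step described in the abstract, choosing between the seq2seq top-1 and the reranked top-1 via an acceptance threshold λ, can be sketched as follows. This is a minimal illustration only: the `score` function and the margin-based comparison rule are assumptions, not the paper's exact formulation.

```python
def select_final(original_top1: str, reranked_top1: str, score, lam: float = 0.5) -> str:
    """Pick the final output from the seq2seq top-1 and the BTR top-1.

    `score` maps a candidate sentence to a probability-like value from
    the reranker; comparing the score margin against `lam` is an
    illustrative assumption, not the paper's precise acceptance rule.
    """
    if reranked_top1 == original_top1:
        # Reranking agreed with the seq2seq model; nothing to decide.
        return original_top1
    # Accept the reranked candidate only when the reranker prefers it
    # by more than the acceptance threshold lambda; otherwise keep the
    # original top-1 as a conservative fallback.
    if score(reranked_top1) - score(original_top1) > lam:
        return reranked_top1
    return original_top1
```

Under this reading, λ controls how confident the reranker must be before its choice overrides the original seq2seq output.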
Bibliographic Information: Journal of Natural Language Processing
Volume 31, Issue 1, pp. 3-46 (44 pages), Issue Date 2024-03-15
Publisher: The Association for Natural Language Processing
ISSN (EISSN): 2185-8314
Publisher Version DOI (Relation Type: isReplacedBy): https://doi.org/10.5715/jnlp.31.3
Publisher Version URI (Relation Type: isReplacedBy): https://www.jstage.jst.go.jp/article/jnlp/31/1/31_3/_article
Rights: https://creativecommons.org/licenses/by/4.0/
Rights Statement: (C) The Association for Natural Language Processing. Licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/).
Publication Type: NA
Versions: Ver.1 2025-07-17 07:24:34.752874



Powered by WEKO3