Investigation of Transitivity Relation in Natural Language Inference

Author(s): Zdebskyi, Petro
Berko, Andrii
Vysotska, Victoria
Editor(s): Khairova, N.
Hamon, T.
Grabar, N.
Burov, Y.
Keywords: Data quality; Data-centric approaches; Language inference; Machine learning; Machine learning models; Modeling accuracy; Modeling architecture; Natural Language Inference; Natural languages; Recognizing Textual Entailment; Recognizing textual entailments; transitive relation
Publication date: 2023
Publisher: CEUR-WS
Journal: CEUR Workshop Proceedings
Volume: 3396
Pages: 334–345
Abstract: 
The motivation of this work is a data-centric approach to improving model accuracy: improving data quality rather than model architecture. The idea is to enrich a dataset with transitivity relations to help a machine learning model learn such dependencies. Alongside enriching the dataset, the work investigates how well a previously trained model captures such relations. The study can thus be divided into two main parts: investigating the dataset and investigating the machine learning model trained on it. It was found that the existing model captures transitive dependencies well. It was also found that the “entailment” relation is more directional than “contradiction” and “neutral”. © 2023 Copyright for this paper by its authors.
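As a minimal illustrative sketch of the enrichment idea summarized above (not the authors' actual code; all names and example sentences are hypothetical): if sentence A entails B and B entails C, a new (A, C) "entailment" pair can be derived, while "contradiction" and "neutral" are not treated as transitive.

```python
# Hypothetical sketch of enriching an NLI dataset with transitive
# "entailment" pairs. Illustrative only; not the paper's implementation.

def enrich_with_transitivity(pairs):
    """Given (premise, hypothesis, label) triples, add (a, c, 'entailment')
    whenever (a, b) and (b, c) are both labeled 'entailment'.
    Only 'entailment' is assumed transitive; 'contradiction' and
    'neutral' pairs are left untouched."""
    entails = {(p, h) for p, h, lab in pairs if lab == "entailment"}
    derived = set()
    for a, b in entails:
        for b2, c in entails:
            if b == b2 and a != c and (a, c) not in entails:
                derived.add((a, c))
    return list(pairs) + [(a, c, "entailment") for a, c in sorted(derived)]

# Toy example: two entailment pairs yield one derived transitive pair.
sample = [
    ("A man is sleeping.", "A person is sleeping.", "entailment"),
    ("A person is sleeping.", "Someone is resting.", "entailment"),
]
enriched = enrich_with_transitivity(sample)
```

The derived pair ("A man is sleeping.", "Someone is resting.", "entailment") is appended to the dataset; in the paper's framing, such generated pairs can both augment training data and probe whether an already-trained model respects transitivity.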
Description: 
Cited by: 0; Conference name: 7th International Conference on Computational Linguistics and Intelligent Systems. Volume II: Computational Linguistics Workshop, CoLInS 2023; Conference date: 20 April 2023 through 21 April 2023; Conference code: 188784
ISSN: 1613-0073
External URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85160842281&partnerID=40&md5=9c262ecf49fc244ed906b00d2a52ced4
