
Learning of text-level discourse parsing
Weiss, Gregor (Author), Bajec, Marko (Mentor)


Abstract
Understanding the sense of discourse relations that appear between segments of text is essential to truly comprehend any natural-language text. Several automated approaches have been suggested, but they all rely on external resources and linguistic feature engineering, and their processing pipelines are built from substantially different models. Instead of designing a system specifically for a given language and task, we pursue a language-independent approach to sense classification of shallow discourse relations. In this dissertation we first present our focused recurrent neural networks (focused RNNs) layer, the first multi-dimensional RNN-attention mechanism for constructing sentence/argument embeddings. It consists of a filtering RNN with a filtering/gating mechanism that enables downstream RNNs to focus on different aspects of each argument of a discourse relation and project it into several embedding subspaces. On top of the proposed mechanism we build our FR system, a novel method for sense classification of shallow discourse relations. In contrast to existing systems, the FR system consists of a single end-to-end trainable model that handles all types and specific situations of discourse relations, requires no feature engineering or external resources, can be used almost out of the box on any language or set of sense labels, and can be applied at either the word-level or the character-level representation. We evaluate the proposed FR system using the official datasets and methodology of the CoNLL 2016 Shared Task. It falls only slightly behind state-of-the-art performance on English, but outperforms other systems without a focused RNNs layer by 8% on the Chinese dataset. Afterwards, we perform a detailed analysis on both languages.
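
To make the description of the focused RNNs layer more concrete, here is a minimal sketch of the idea as read from the abstract: a filtering/gating RNN produces per-subspace gates over an argument's token sequence, each downstream RNN reads a differently gated copy of that sequence, and their final states are concatenated into the argument embedding. The framework (PyTorch), the choice of GRUs, the sigmoid gating, and all names (FocusedRNNs, num_subspaces, ...) are assumptions made here for illustration, not the dissertation's actual implementation.

# Sketch of a focused RNNs layer, assuming PyTorch and GRU cells.
import torch
import torch.nn as nn

class FocusedRNNs(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_subspaces):
        super().__init__()
        # Filtering RNN emits one gate value per subspace at every time step.
        self.filter_rnn = nn.GRU(input_dim, num_subspaces, batch_first=True)
        # One downstream RNN per embedding subspace.
        self.downstream = nn.ModuleList(
            nn.GRU(input_dim, hidden_dim, batch_first=True)
            for _ in range(num_subspaces)
        )

    def forward(self, x):
        # x: (batch, seq_len, input_dim) -- one argument of a discourse relation
        gates, _ = self.filter_rnn(x)               # (batch, seq_len, num_subspaces)
        gates = torch.sigmoid(gates)
        subspace_embeddings = []
        for k, rnn in enumerate(self.downstream):
            # Scale the whole input sequence by the k-th gate before the k-th RNN.
            gated_x = x * gates[..., k:k + 1]
            _, h = rnn(gated_x)                     # final hidden state: (1, batch, hidden_dim)
            subspace_embeddings.append(h.squeeze(0))
        # Concatenate subspace embeddings into the argument embedding.
        return torch.cat(subspace_embeddings, dim=-1)

# Example: embed a batch of 2 arguments, 10 tokens each, 50-dim word vectors.
layer = FocusedRNNs(input_dim=50, hidden_dim=32, num_subspaces=4)
arguments = torch.randn(2, 10, 50)
print(layer(arguments).shape)  # torch.Size([2, 128])

In this reading, the gating lets each downstream RNN attend to a different aspect of the argument, and the concatenated final states form the multi-subspace argument embedding that the FR system would then classify.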

Language: English
Keywords: natural language processing, shallow discourse relations, recurrent neural networks, attention mechanisms, language-independent, no external resources
Work type: Doctoral dissertation
Organization: FRI - Faculty of Computer and Information Science
Year: 2019
PID: 20.500.12556/RUL-108469
Publication date in RUL: 03.07.2019
Views: 1246
Downloads: 277

Secondary language

Language: Slovenian
Title: Učenje besedilnega razčlenjevanja diskurzov
Abstract:
Understanding the sense of discourse relations that occur between segments of text is essential for understanding any natural-language text. Numerous automated approaches have already been proposed, but they all depend on external resources and hand-crafted features, and their processing pipelines are built from substantially different models. Instead of building a system specialized for a given language and task, we pursue a language-independent approach to sense classification of shallow discourse relations. In this dissertation we first present our focused recurrent neural networks (focused RNNs), the first multi-dimensional RNN-attention mechanism for constructing sentence/argument embeddings. It consists of a filtering RNN with a filtering/gating mechanism that enables the downstream RNNs to focus on different aspects of each argument of a discourse relation and to project it into several embedding subspaces. We use this mechanism in our FR system, a novel method for sense classification of shallow discourse relations. In contrast to existing systems, the FR system consists of a single model that can be trained end-to-end, handles all types and specific situations of discourse relations, requires no hand-crafted features or external resources, can be used almost without modification on any language or set of sense labels, and can be applied at both the word level and the character level. We evaluated the proposed FR system on the official datasets and according to the methodology of the CoNLL 2016 Shared Task. It does not fall far behind the best-performing systems on English, but it outperforms other systems without a focused RNNs layer by 8% on the Chinese dataset. We then performed a more detailed analysis on both languages.

Keywords: natural language processing, shallow discourse relations, recurrent neural networks, attention mechanisms, language independence, no external resources
