
Statistična analiza slovenskih jezikovnih korpusov
KLJUČEVŠEK, ALEKSANDER (Author), Robnik Šikonja, Marko (Mentor), Krek, Simon (Co-mentor)

PDF - Presentation file (561.69 KB)
MD5: 7C63EA28D0E7F2813D279BC796DD8C5D
PID: 20.500.12556/rul/a1880141-3dca-49f2-881a-b9fed53c7177

Abstract
Natural language processing is an important and extensive branch of computer science, yet most existing tools are developed for and adapted to processing English texts. We developed a tool for the statistical analysis of large language corpora that takes into account the characteristics of Slovene as a strongly inflected language. Modern text corpora may contain several billion words, so much of the attention was devoted to developing efficient parallel algorithms with which such large collections can be processed in a relatively short time even on ordinary computers. With the tool we analysed the Gigafida corpus, which contains 1.2 billion words, on several levels: the level of character strings, words, n-grams, prefixes and suffixes, as well as word-formation processes in Slovene.
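The abstract describes counting n-grams over a billion-word corpus using parallel algorithms. The following is a minimal Python sketch of that general idea, assuming pre-tokenized text chunks and a simple map-then-merge split over worker processes; the function names, chunking scheme, and worker setup are illustrative assumptions, not the thesis's actual implementation.

# Minimal sketch: parallel n-gram counting over a tokenized corpus.
# The chunk format and multiprocessing split are illustrative assumptions.
from collections import Counter
from multiprocessing import Pool


def count_ngrams(tokens, n=2):
    """Count n-grams in one list of tokens."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def merge_counts(counters):
    """Combine per-chunk counts into one global frequency table."""
    total = Counter()
    for c in counters:
        total.update(c)
    return total


if __name__ == "__main__":
    # Hypothetical input: each chunk is a pre-tokenized sentence.
    chunks = [
        ["to", "je", "kratek", "stavek"],
        ["to", "je", "drug", "stavek"],
    ]
    with Pool() as pool:
        partial = pool.map(count_ngrams, chunks)
    print(merge_counts(partial).most_common(3))

In practice each worker would read its own slice of the corpus from disk and only the compact per-chunk counts would be merged, which keeps memory use bounded even for very large collections.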

Language:Slovenian
Keywords:statistical language analysis, language corpus, Gigafida, parallel algorithms
Work type:Undergraduate thesis
Organization:FRI - Faculty of Computer and Information Science
Year:2016
PID:20.500.12556/RUL-85513
Publication date in RUL:15.09.2016
Views:1425
Downloads:318

Secondary language

Language:English
Title:Statistical analysis of Slovene language corpuses
Abstract:
Natural language processing is an important area of computational linguistics and artificial intelligence. However, most of its existing applications are developed for and based on English texts. We developed an application for the statistical analysis of large text corpora, which takes into account the unique characteristics of Slovene as a strongly inflected language. Since modern text corpora can consist of several billion words, we paid special attention to efficient parallel algorithms capable of processing such collections in a relatively short amount of time. We analyzed the Gigafida corpus, which consists of 1.2 billion words, on multiple levels: the string level, word level, n-gram level, prefix and suffix level, as well as word-formation processes in Slovene.

Keywords:statistical language analysis, text corpus, Gigafida, parallel algorithms
