
Text processing and analysis in Matlab : final seminar paper
Širca, Klara (Author), Knez, Marjetka (Mentor)

PDF - Presentation file (872.50 KB), MD5: 5C721EBA31D31EE34B9D86EB21D9DD24
PDF - Appendix (119.46 KB), MD5: FAA33B6A2C8A14055E3F24FDBB7780C3

Abstract
This thesis is devoted to methods for processing and analysing natural language in texts. It begins with a brief description of the Matlab \emph{Text Analytics Toolbox} and then presents the basic Matlab objects for storing text. The structural components of natural language are described, together with the ways they can be processed so that a computer can understand them. Methods such as tokenisation, lemmatisation and stemming are covered, and their use in Matlab is demonstrated on the freely available book \emph{Crime and Punishment} by Dostoyevsky. Next, Zipf's law and some of the main statistical properties of text corpora are presented; their consequences are mentioned and illustrated with an example. Part of the thesis is devoted to geometric models. Here, different representations of text collections are defined: the term-document matrix and the \emph{tf-idf} matrix, as well as the word co-occurrence matrix and the overlap matrix. Since the term-document matrix represents a vector space of words and documents, the basic features of representing documents in vector spaces are briefly described, together with ways of measuring the similarity between documents. The fifth chapter deals with the analysis of word co-occurrences in paragraphs: the theoretical background of Shannon entropy and mutual information is given first, and on this basis the pointwise mutual information is computed for word pairs in \emph{Crime and Punishment}; the findings that follow from this computation are presented and described. Pearson's $\chi^2$-test of independence is also described, and its usefulness is demonstrated in the analysis of word collocations. The sixth chapter presents $n$-gram models: unigram, bigram and trigram. These models rest on the Markov assumption. The estimation of parameters in $n$-gram language models is explained, followed by several discounting methods, which take probability mass away from events that occur in the training corpus and redistribute it among unseen events. The final chapter describes some dimension-reduction methods, such as vocabulary pruning and merging, and dimension reduction using the singular value decomposition.
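
The preprocessing steps named in the abstract (tokenisation, stop-word removal, lemmatisation, stemming) correspond directly to Text Analytics Toolbox functions. A minimal sketch, not the thesis code: it assumes a hypothetical local plain-text copy of the novel, crime_and_punishment.txt, and splits it into paragraphs at blank lines.

\begin{verbatim}
% Minimal preprocessing sketch (requires the Text Analytics Toolbox).
% "crime_and_punishment.txt" is a hypothetical local plain-text copy of the novel.
raw        = extractFileText("crime_and_punishment.txt");
paragraphs = split(raw, [newline newline]);        % one string per paragraph
documents  = tokenizedDocument(paragraphs);        % tokenisation
documents  = removeStopWords(erasePunctuation(lower(documents)));
lemmas     = normalizeWords(documents, 'Style', 'lemma');  % lemmatisation
stems      = normalizeWords(documents, 'Style', 'stem');   % Porter stemming
\end{verbatim}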
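
Zipf's law states that the frequency of the $r$-th most frequent word falls off roughly as $1/r$, so the rank-frequency relation is close to a straight line on log-log axes. Continuing the sketch above, this can be checked from a bag-of-words model:

\begin{verbatim}
% Rank-frequency (Zipf) plot for the lemmatised corpus.
bag  = bagOfWords(lemmas);
freq = sort(full(sum(bag.Counts, 1)), 'descend');  % corpus frequency of every word
loglog(1:numel(freq), freq, '.');
xlabel('rank r'); ylabel('frequency f(r)'); title('Zipf plot');
\end{verbatim}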
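
The geometric models start from the term-document matrix and its \emph{tf-idf} weighting; with paragraphs as rows, the cosine similarity of two paragraphs is the inner product of their unit-normalised rows. A sketch, again continuing from the bag of words above:

\begin{verbatim}
% tf-idf weighting and pairwise cosine similarity between paragraphs.
M  = tfidf(bag);                          % sparse, rows = paragraphs, columns = terms
n  = full(sqrt(sum(M.^2, 2)));            % Euclidean length of every row
n(n == 0) = 1;                            % guard against empty paragraphs
Mn = spdiags(1 ./ n, 0, size(M,1), size(M,1)) * M;   % rows scaled to unit length
Sim = Mn * Mn';                           % Sim(i,j) = cosine similarity of paragraphs i and j
\end{verbatim}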
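
For the co-occurrence analysis, the pointwise mutual information of a word pair is $\mathrm{PMI}(w_1,w_2) = \log_2 \frac{P(w_1,w_2)}{P(w_1)\,P(w_2)}$, with the probabilities estimated from the presence of the words in the $N$ paragraphs. A sketch of this estimate; the vocabulary is pruned first because the full $V \times V$ matrix is only practical for a modest vocabulary, and the threshold of 20 occurrences is an arbitrary illustrative choice.

\begin{verbatim}
% Pointwise mutual information estimated from paragraph co-occurrences.
bag = removeInfrequentWords(bag, 19);  % vocabulary pruning: keep words seen at least 20 times
X   = double(bag.Counts > 0);          % X(i,j) = 1 if word j occurs in paragraph i
N   = size(X, 1);                      % number of paragraphs
p   = full(sum(X, 1)) / N;             % estimate of P(w) for every word
C   = full(X' * X) / N;                % C(j,k) = estimate of P(w_j and w_k in one paragraph)
PMI = log2(C ./ (p' * p));             % -Inf for pairs that never co-occur
\end{verbatim}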
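
Pearson's $\chi^2$-test of independence for a candidate collocation compares the observed $2 \times 2$ table of paragraph counts with the counts expected if the two words occurred independently; with a $2 \times 2$ table there is one degree of freedom. A sketch for one illustrative pair, taken simply as the two most frequent words so that the example runs; the p-value line needs chi2cdf from the Statistics and Machine Learning Toolbox.

\begin{verbatim}
% Pearson chi-squared test of independence for one candidate word pair.
tk = topkwords(bag, 2);                        % two most frequent words, only for illustration
j  = find(bag.Vocabulary == tk.Word(1));
k  = find(bag.Vocabulary == tk.Word(2));
O  = [nnz( X(:,j) &  X(:,k)), nnz( X(:,j) & ~X(:,k)); ...
      nnz(~X(:,j) &  X(:,k)), nnz(~X(:,j) & ~X(:,k))]; % observed 2x2 table of paragraph counts
E  = sum(O, 2) * sum(O, 1) / sum(O(:));        % expected counts under independence
chi2 = sum((O - E).^2 ./ E, 'all');            % test statistic
pval = 1 - chi2cdf(chi2, 1);                   % 1 degree of freedom (Statistics Toolbox)
\end{verbatim}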
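
Under the Markov assumption a bigram model conditions only on the previous word; the maximum-likelihood estimate is $P(w_2 \mid w_1) = c(w_1,w_2)/c(w_1)$, and add-one (Laplace) discounting replaces it with $(c(w_1,w_2)+1)/(c(w_1)+V)$ so that unseen bigrams keep some probability mass. A base-Matlab sketch over the flat token sequence, with add-one discounting standing in for the discounting methods discussed in the thesis:

\begin{verbatim}
% Bigram counts and conditional probabilities under the Markov assumption.
td = tokenDetails(lemmas);            % one table row per token, in corpus order
[vocab, ~, idx] = unique(td.Token);   % map each token to an integer id
V  = numel(vocab);
B  = accumarray([idx(1:end-1), idx(2:end)], 1, [V V], [], [], true);  % sparse c(w1,w2)
cnt = accumarray(idx, 1);             % unigram counts c(w)
[~, order] = sort(cnt, 'descend');
i1 = order(1);  i2 = order(2);        % an illustrative pair: the two most frequent words
c1 = full(sum(B(i1, :)));             % number of bigrams starting with w1
p_ml  = full(B(i1, i2)) / c1;                    % maximum-likelihood P(w2 | w1)
p_lap = (full(B(i1, i2)) + 1) / (c1 + V);        % add-one (Laplace) discounted estimate
\end{verbatim}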
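
The dimension reduction in the last chapter is based on the truncated singular value decomposition: keeping the $r$ largest singular values of the (weighted) term-document matrix gives its best rank-$r$ approximation and places paragraphs and words in a common $r$-dimensional space. A sketch with svds, where $r = 50$ is an arbitrary illustrative choice:

\begin{verbatim}
% Truncated SVD of the tf-idf matrix (dimension reduction in the LSA style).
r = 50;                               % target dimension, an arbitrary illustrative choice
A = tfidf(bag);                       % paragraphs x terms, sparse
[U, S, W] = svds(A, r);               % r largest singular values and vectors
docVecs  = U * S;                     % paragraph coordinates in the reduced space
wordVecs = W * S;                     % word coordinates in the same space
Ar = U * S * W';                      % best rank-r approximation of A
\end{verbatim}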

Language: Slovenian
Keywords: Matlab, natural language processing, Zipf's law, mutual information, Pearson's chi-squared test, $n$-gram models, SVD
Work type: Final seminar paper
Typology: 2.11 - Undergraduate Thesis
Organization: FMF - Faculty of Mathematics and Physics
Year: 2022
PID: 20.500.12556/RUL-141651
UDC: 004.4
COBISS.SI-ID: 125665795
Publication date in RUL: 04.10.2022
Views: 1635
Downloads: 144

