
Efficient Natural Language Processing with Python: bachelor thesis
Martinc, Matej (Author), Kukar, Matjaž (Mentor)

PDF - Presentation file (2.35 MB)
MD5: 6E497E02F60F37275424F18E65EE8571
PID: 20.500.12556/rul/e9f02ec1-0b3b-4edc-b62f-d63d8c1599c1

Abstract
In this thesis we compared various tools and libraries for natural language processing in the Python programming language. Besides the most widespread library, NLTK, we also examined less known libraries such as SpaCy, PyNLPl, Pattern and TextBlob, and compared them on the basis of various criteria and practical tasks, such as tokenization, lemmatization, stemming, part-of-speech tagging, building parse trees, searching for patterns in text, counting word frequencies, and building n-grams. The libraries were compared by the criterion of functionality: we verified in practice which methods and tools for natural language processing each library provides and demonstrated their behaviour with practical examples. We also paid attention to access to different text corpora and to the ability to process Slovenian. The most common natural language processing methods, such as tokenization, lemmatization and part-of-speech tagging, were compared on the criteria of speed and accuracy. We then turned to determining the sentiment of Slovenian tweets and comments with machine learning, testing a set of classifiers from the NLTK, Pattern and TextBlob libraries and measuring their accuracy and speed. We tried to improve classification accuracy by filtering based on part-of-speech tagging, filtering based on TF-IDF, including bigrams in the training set, and removing URLs and repeated characters. Finally, we compared the machine-learning approach to sentiment analysis with a lexicon-based method. We concluded that the PyNLPl library, lacking tools for lemmatization, stemming and part-of-speech tagging, is unsuitable for natural language processing.
The NLTK library is slow, but its extensive documentation and wealth of alternative methods make it suitable for learning; it also stands out for its flexibility, which makes it the only library that offers partial support for processing Slovenian. The SpaCy and TextBlob libraries proved fast and accurate, but are rather inflexible and offer too little functionality. The Pattern library turned out to be average in terms of speed and accuracy, but stands out for its flexibility, breadth of functionality and ease of use.
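As an illustration of the basic tasks the abstract lists (tokenization, word-frequency counting, n-gram building), here is a minimal pure-Python sketch; the regex tokenizer and function names are our own illustrative choices, not code from the thesis or from any of the compared libraries:

```python
import re
from collections import Counter

def tokenize(text):
    # Naive regex tokenizer: lowercase, word characters only.
    return re.findall(r"\w+", text.lower())

def ngrams(tokens, n):
    # Build n-grams as tuples of n consecutive tokens.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

text = "Natural language processing with Python makes processing text simple."
tokens = tokenize(text)
freqs = Counter(tokens)   # word-frequency counts
bigrams = ngrams(tokens, 2)
```

The libraries compared in the thesis provide much more sophisticated versions of these operations (e.g. language-aware tokenizers), but the underlying idea is the same.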

Language:Slovenian
Keywords:natural language processing, Python, lemmatization, tokenization, part-of-speech tagging, sentiment analysis
Work type:Bachelor thesis/paper
Typology:2.11 - Undergraduate Thesis
Organization:FRI - Faculty of Computer and Information Science
Publisher:[M. Martinc]
Year:2015
Number of pages:90 pp.
PID:20.500.12556/RUL-72444
COBISS.SI-ID:1536500931
Publication date in RUL:17.09.2015
Views:3132
Downloads:388

Secondary language

Language:English
Title:Efficient Natural Language Processing with Python
Abstract:
The thesis compares different tools and libraries for natural language processing in the Python programming language. In addition to NLTK, the most popular library for natural language processing, we thoroughly researched other, less known libraries, such as SpaCy, PyNLPl, Pattern and TextBlob, and compared them on the basis of different criteria and practical assignments, such as tokenization, lemmatization, stemming, part-of-speech tagging, parse tree building, searching for patterns in text, word frequency counting and n-gram building. The libraries were compared by functionality, and their methods and tools for natural language processing were analysed and described in detail. Special attention was paid to each library's access to different text corpora and to its ability to process Slovenian. The most common natural language processing methods, such as tokenization, lemmatization and part-of-speech tagging, were compared by speed and accuracy. Afterwards we focused on sentiment analysis of Slovenian tweets and internet comments with machine learning, testing classifiers from different libraries and comparing them by speed and accuracy. We tried to improve classifier accuracy with several methods: part-of-speech filtering, filtering based on TF-IDF, n-gram inclusion, and removal of URLs and character repetitions in words. The machine-learning approach to sentiment analysis was also compared with a lexicon-based approach. We concluded that the PyNLPl library lacks the basic methods for natural language processing. The NLTK library is slow but suitable for learning because of its extensive documentation and large set of alternative methods; it is also very flexible, which makes it the only library that partially supports processing of Slovenian. The SpaCy and TextBlob libraries are very fast and accurate, but lack flexibility and functionality.
The Pattern library turned out to be average in terms of speed and accuracy, but is quite flexible, has many functionalities and is easy to use.
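The TF-IDF filtering mentioned in the abstract weights a term by its frequency in a document against the number of documents that contain it, so terms that appear everywhere score low and can be discarded as features. A minimal sketch of the standard tf × log(N/df) formula, with an invented toy corpus (not data or code from the thesis), might look like:

```python
import math

def tf_idf(term, doc_tokens, corpus):
    # tf: raw count of the term in this document.
    tf = doc_tokens.count(term)
    # df: number of documents in the corpus containing the term.
    df = sum(1 for doc in corpus if term in doc)
    # idf in the common log(N / df) form; df > 0 is assumed.
    idf = math.log(len(corpus) / df)
    return tf * idf

corpus = [                          # toy tokenized "tweets", invented here
    ["dober", "film"],
    ["slab", "film"],
    ["zelo", "dober", "film"],
]
score = tf_idf("dober", corpus[0], corpus)
```

A term occurring in every document (here "film") gets idf = log(1) = 0, so TF-IDF-based filtering drops such uninformative features from the training set.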

Keywords:natural language processing, Python, lemmatization, tokenization, part of speech tagging, sentiment analysis
