
Pitfalls in assessing the accuracy of predictive models
Tupkušić, Mirza (Author), Blagus, Rok (Mentor)

PDF - Presentation file (4,19 MB)
MD5: D557EEB5E490C444A63C09A20E735209

Abstract
Owing to the diversity of practical tasks and to gaps in experience and knowledge, we often run into problems that go entirely unnoticed. Depending on the problem at hand, such mistakes may affect the validity of the results negligibly or quite harmfully. In either case, because improper handling of practical problems inevitably affects statistical conclusions, the correctness of the procedures used always deserves attention. To this end, four pitfalls in building models and assessing their accuracy are presented; they arise in the tasks where mistakes most commonly occur in practice. Results of three statistical methods on simulated and real data are shown. For the first pitfall, concerning variable selection, we showed that models in both simulated and real cases often appeared overly optimistic when variables were selected before cross-validation was used to estimate accuracy. An analysis of different data spaces showed a negligible impact of this bias on models built in a low-dimensional space and a considerable impact in a high-dimensional space. The results showed that selecting variables before applying cross-validation can yield models whose estimated accuracy equals the ideal even when there is no actual difference between the groups. For the second pitfall, related to tuning parameter values, the results showed a similar bias in the accuracy estimates, but to a lesser extent than for variable selection. The results for the third pitfall, concerning incorrect accuracy assessment of selected models, highlight, beyond the negative impact of an incorrect implementation, the importance of how informative the data are: as the informativeness of the data increases, the bias decreases. In the simulations, nested cross-validation is presented as a procedure for model selection and unbiased accuracy estimation. Despite its long running time, nested cross-validation provides correct accuracy estimates on average; the simulations also reveal that a single run of it does not guarantee a correct estimate. The results for the fourth pitfall, related to data balancing, likewise show the bias introduced by balancing the data before applying cross-validation, as well as how successfully different imbalance corrections improve model accuracy. Taking all accuracy measures into account, undersampling performs better than SMOTE and oversampling. In the simulations and in most real cases, the corrections did not affect the methods' performance appreciably more than simply ignoring the imbalance and adjusting the classification threshold.
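
To make the first pitfall concrete, here is a minimal sketch in Python (assuming scikit-learn; the logistic regression classifier, the k = 10 selection, and the data sizes are illustrative choices, not the methods analysed in the thesis). On pure noise with no group difference, selecting features on the full data before cross-validation reports accuracy far above chance, while selection refit inside each fold stays near 0.5:

    # Illustrative sketch: feature selection outside vs. inside cross-validation.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 5000))  # high-dimensional pure noise
    y = rng.integers(0, 2, 50)           # random labels: no group difference

    # Wrong: pick the 10 "best" features on the full data, then cross-validate.
    # The selection has already seen the test folds, so accuracy looks excellent.
    X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
    biased = cross_val_score(LogisticRegression(max_iter=1000), X_sel, y, cv=5)

    # Right: put the selection inside the pipeline so it is refit on each
    # training fold only; the estimate now stays near the chance level.
    pipe = make_pipeline(SelectKBest(f_classif, k=10),
                         LogisticRegression(max_iter=1000))
    honest = cross_val_score(pipe, X, y, cv=5)

    print(f"selection before CV: {biased.mean():.2f}")  # typically far above chance
    print(f"selection inside CV: {honest.mean():.2f}")  # typically near 0.5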
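Nested cross-validation, which the abstract presents for combining parameter tuning with an unbiased accuracy estimate, can be sketched the same way (again with scikit-learn; the SVM, its C grid, and the fold counts are assumptions made for this example, not the thesis's setup):

    # Illustrative sketch: nested cross-validation around a tuned classifier.
    import numpy as np
    from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 20))
    y = (X[:, 0] + rng.standard_normal(100) > 0).astype(int)  # weak signal

    inner = KFold(n_splits=5, shuffle=True, random_state=1)  # chooses C
    outer = KFold(n_splits=5, shuffle=True, random_state=2)  # scores the choice

    tuned = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner)

    # Wrong: report the best inner-CV score; it is optimistically biased
    # because the same folds both chose C and scored it.
    tuned.fit(X, y)
    print(f"inner-CV best score (biased): {tuned.best_score_:.2f}")

    # Right: wrap the tuner in an outer loop. As the abstract notes, a single
    # run is still noisy, so in practice the whole procedure is repeated and
    # the outer estimates are averaged.
    nested = cross_val_score(tuned, X, y, cv=outer)
    print(f"nested CV estimate: {nested.mean():.2f}")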
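The balancing pitfall has the same structure. Below is a sketch under the assumption that the third-party imbalanced-learn package is available; the thesis compares undersampling, SMOTE, and oversampling, and SMOTE with a nearest-neighbour classifier is used here only because it shows the leakage mechanism most directly, since synthetic minority points built from test-fold neighbours end up in the training data:

    # Illustrative sketch: resampling before vs. inside cross-validation.
    import numpy as np
    from imblearn.over_sampling import SMOTE
    from imblearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(2)
    X = rng.standard_normal((400, 10))
    y = (rng.random(400) < 0.1).astype(int)  # roughly 10% minority, no signal

    # Wrong: oversample the whole dataset first. Synthetic points derived
    # from test-fold neighbours leak into training and inflate the score.
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
    biased = cross_val_score(KNeighborsClassifier(), X_bal, y_bal, cv=5,
                             scoring="balanced_accuracy")

    # Right: resample inside the pipeline, on each training fold only,
    # and evaluate on folds that keep the real class distribution.
    pipe = make_pipeline(SMOTE(random_state=0), KNeighborsClassifier())
    honest = cross_val_score(pipe, X, y, cv=5, scoring="balanced_accuracy")

    print(f"SMOTE before CV: {biased.mean():.2f}")  # optimistic
    print(f"SMOTE inside CV: {honest.mean():.2f}")  # near 0.5 on noise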

Language:Slovenian
Keywords:data classification, pitfalls of classification tasks, overoptimistic assessments, statistical methods, nested cross-validation, imbalance corrections
Work type:Master's thesis/paper
Organization:FE - Faculty of Electrical Engineering
Year:2024
PID:20.500.12556/RUL-158423
COBISS.SI-ID:199607555
Publication date in RUL:10.06.2024
Views:388
Downloads:93
