
ARTIFICIAL SYNTHESIS OF EMOTIONAL SLOVENIAN SPEECH USING HIDDEN MARKOV MODELS
Justin, Tadej (Author), Žibert, Janez (Mentor)

PDF - Presentation file, download (2.93 MB)
MD5: F5A5A74C15C88B0BD6182F432BBAE4E6
PID: 20.500.12556/rul/7d94e475-ea94-45d6-9911-6dfc446294aa

Abstract
Speech is the most widely used form of communication between people and is therefore often described as the most natural human form of communication. With little effort, people use speech to communicate, to learn and to exchange a variety of messages. Human communication, however, is not limited to the acoustic message alone; we complement it with nonverbal messages. Speech is often accompanied by various gestures, facial expressions, eye contact, posture, touch and so on. During communication we perceive all of this unconsciously, with whatever senses are available to us at a given moment. The information gathered through the different senses is collected and processed in the brain, which, just as unconsciously, enables us to interpret the message correctly and to recognise its context. Nonverbal communication is thus an important complement to human vocal communication, providing additional information that makes it possible to understand the received message effectively and to place it in a wider context.

In this doctoral dissertation we investigate the production and perception of vocal messages. A vocal message can be defined as the utterance of words in a given language together with the accompanying nonverbal message, which is often hidden from the listener. Nevertheless, an attentive recipient can recognise it and respond to it appropriately. Such nonverbal communication, embedded in the acoustic message, is frequently referred to as paralanguage. It is an integral part of the vocal message and can be roughly divided into components such as rhythm, tone, intonation, slips of the tongue, word emphasis, pauses and silence. Combined with the uttered words, these components make up the complete vocal message. Paralanguage also includes the paralinguistic states of the speaker, a distinctive kind of which are emotions. Under the influence of a particular emotional state, a speaker will often modify his or her speech and express that state through nonverbal cues of his or her own. We are rarely directly aware of these changes in speech, yet it is often precisely through them that we grasp the true meaning of the conveyed message. The recipient of an emotionally coloured vocal message thus easily recognises these vocal modifications and, albeit unconsciously, uses them to classify the message into a particular group of the interlocutor's emotional states. Such unconscious classification, like the unconscious production of emotional messages, is part of our everyday life and influences mutual verbal communication, comprehension and, not least, the way messages are experienced. A textual message together with all the components of paralanguage constitutes the message as a whole; people produce and receive it unconsciously, and it forms part of our most natural means of communication. Speech with all the elements of nonverbal communication is therefore one of the most natural means of human communication, encountered spontaneously in everyday interaction.

Ever since the beginning of digitalisation, developers have wanted to find a way for humans to communicate with machines in the most natural manner, that is, by speaking in their own language. A spoken dialogue between a human and a machine should resemble communication between people as closely as possible, with both the machine and the human producing and receiving spoken messages. On the machine side, receiving messages is defined as the problem of speech recognition, while producing speech is defined as speech synthesis. The two fields share many properties, which is why synthesis is often described as the inverse process of speech recognition.
In recent years researchers have considerably refined the principles and procedures of both processes. Even so, people still do not communicate by voice with the ever more capable machines of the digital age, such as personal computers, smartphones and other modern devices. Apart from the demanding research work required in the field of speech technologies, one reason for this can be found in linguistic diversity. Because systems for modelling and synthesising speech depend strongly on the language, its acoustic and lexical specifics have to be studied for each language separately. At the time of writing this dissertation, only a handful of world languages have systems that support a limited dialogue with machines; most other languages are unfortunately still neglected. One reason for this selection lies in the databases available for a given language for implementing already developed solutions: only well-organised speech corpora that also contain enough speech material can be used to build such systems.

This doctoral dissertation deals with building systems for the artificial synthesis of Slovenian speech. In speech synthesis we focus on the intelligibility and naturalness of the generated speech. It often turns out that synthetic speech does not resemble natural speech closely enough, so researchers strive to develop systems that improve this aspect in particular. If a sufficiently large speech database were available for training, one that reflected all the characteristics of a specific speaker's language, a system could be built that would undoubtedly excel on both levels of evaluation. Unfortunately, such extensive speech corpora do not yet exist, so system developers are always constrained by how well the material is represented in the speech database. Building speech databases is a lengthy and expensive process, which is why smaller databases are often created for more specific purposes. To improve the naturalness of synthetic speech in particular, databases have recently been enriched with annotations of individual paralanguage components or even with labels of the speaker's emotional states. For synthesis we want the database to contain as many speech examples of a single speaker as possible; modern approaches to speech synthesis can then model the characteristics of that speaker's speech reasonably well. If the database also carries emotion labels, this specific property can be modelled as well, but only if enough recordings of the speaker in a given emotional state are available.

Obtaining the necessary amount of emotional speech, however, is not the only problem when collecting data for such a database. Since there is no general definition that would unambiguously determine what an emotional state is, the perception of emotional states in speech is always left to the subjective judgement of the individual. It is therefore unrealistic to expect people to agree completely on which emotional state a speaker is in, especially when the speaker is someone they do not know. The procedure for obtaining high-quality labels must therefore be treated as one of the more demanding problems in collecting emotional speech for a database, and it is one of the problems addressed in this dissertation.

The recent literature describes two modern approaches to building speech synthesis systems that differ fundamentally from each other: the first is based on concatenating natural speech segments, while the second relies on the parametrisation and modelling of speech segments.
The first approach can produce more natural-sounding synthetic speech, since it concatenates clean segments of natural recordings, whereas the second models the segments and generates synthetic speech from models of acoustic units. The main difference between the two approaches shows up in the amount of material needed to build them: to achieve high-quality, intelligible speech, the second approach needs considerably less material than the first. If we also try to build individual paralanguage components or emotional states into the system, the first approach requires far more material than the second. Because emotional states are difficult to define, we can expect to have only a small amount of high-quality emotional material at our disposal. For this reason the dissertation focuses on building a system for the artificial synthesis of emotional Slovenian speech using parametric speech models obtained with hidden Markov models (HMM). Thanks to the parametrisation of speech, this approach models speech with statistical models estimated from a speech database. By changing the parameters of these statistical models we can change the acoustic and intonational properties of the speech as well as its duration; this is done with procedures for adapting and interpolating the statistical models. In this dissertation we also use such procedures to produce the speaker's emotional states.

Every implemented speech synthesis system has to be evaluated. As already mentioned, synthesis systems are evaluated on two levels: the first assesses intelligibility and the second the naturalness of the synthetic speech signal. Synthesised emotional speech can be evaluated in much the same way as when building an emotional speech database: each synthesised recording of emotional speech is assessed by evaluators who use a questionnaire to state whether the required emotional states of the speaker are really present in the recording. A credible evaluation is possible only with enough evaluators and enough synthesised emotional speech signals. Such a procedure belongs to the subjective evaluation of systems, but subjective evaluation is expensive and time-consuming. Developers therefore want to be able to evaluate their systems faster and more objectively, yet up to the time of this dissertation no reliable objective procedure existed that would offer developers a faster and more efficient evaluation of implemented emotional speech synthesis systems.

This doctoral dissertation focuses on building a system for the artificial synthesis of Slovenian emotional speech. We implement all the components needed to develop a parametric speech synthesis system. By modifying known procedures based on hidden Markov models (HMM), we propose a method with which a Slovenian emotional speech synthesis system can be developed from a limited amount of emotional material. The method is based on a statistical analysis of the quality of the labels assigned to the emotional speech recordings. With such an approach we can extract, from a small amount of emotional speech, the specific information that an individual speaker expresses in a particular emotional state. Speech material reflecting a neutral emotional state also plays an important role in the procedure: such material is usually the most abundant in emotional speech databases and forms the basis for building an emotional speech synthesis system. The emotionally neutral material is thus used to build the basic statistical model with HMM techniques.
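As an illustration of the interpolation of statistical models mentioned above, the following minimal Python sketch mixes the per-state Gaussian means and variances of a neutral voice model and an emotional voice model with a weight alpha. The model layout, array shapes and names are illustrative assumptions, not the dissertation's actual implementation.

```python
# A minimal sketch of model interpolation as used in HMM-based synthesis:
# the Gaussian state distributions of a neutral model and an emotional model
# are mixed with a weight alpha, giving an intermediate model whose generated
# spectra and F0 lie between the two styles. Illustrative only.
import numpy as np

def interpolate_gaussian_states(means_neutral, means_emotion,
                                vars_neutral, vars_emotion, alpha=0.5):
    """Linearly interpolate per-state Gaussian parameters.

    alpha = 0.0 reproduces the neutral model, alpha = 1.0 the emotional one;
    intermediate values yield a graded emotional colouring.
    """
    means = (1.0 - alpha) * means_neutral + alpha * means_emotion
    # Interpolating variances linearly is one common simplification.
    variances = (1.0 - alpha) * vars_neutral + alpha * vars_emotion
    return means, variances

# Toy example: 5 HMM states, 3-dimensional observation vectors.
rng = np.random.default_rng(0)
mu_neu, mu_emo = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
var_neu, var_emo = np.ones((5, 3)), 2.0 * np.ones((5, 3))
mu_mix, var_mix = interpolate_gaussian_states(mu_neu, mu_emo,
                                              var_neu, var_emo, alpha=0.3)
```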
Adaptation techniques make it possible to map a well-estimated statistical model of natural speech into a statistical model of a particular emotional state of the speaker. With a model obtained in this way we can generate arbitrary, high-quality synthetic speech in the target emotional state.

The next contribution presented in the dissertation concerns the objective evaluation of systems for the artificial synthesis of emotional speech. We propose a procedure based on the Euclidean distance between the mel-cepstral feature vectors of the original and the synthesised recordings. The distances obtained for each synthesised emotional recording express its similarity to the original recording, with the smallest distance identifying the most similar recording. If the original recording carries an emotion label, a verification method can automatically determine whether the speech synthesis system really produced speech that is closest to the emotional speech in the original recording.

The dissertation also presents a new database of Slovenian emotional speech, built from recordings of Slovenian radio plays that we obtained for annotation and further processing with the permission of RTV Slovenija. Although the material contains acted emotional states, we believe they resemble the emotional states found in spontaneous speech. The reasons for this claim lie in the wider context of the text and in the dialogues between the protagonists: the actors portray their roles with a wide range of emotional states, which is reflected in the acoustics and manner of delivery as the actor's emotional speech. The approach is therefore not limited to a single radio play; the acoustic material of an individual actor or actress can be collected from several plays. Another important factor when collecting acoustic material is the quality of the individual recordings: radio plays are mostly recorded with professional equipment, so the collected recordings are of sufficient quality for further processing. The dissertation presents a methodology for collecting emotional acoustic material from radio plays, illustrated on one selected male and one selected female speaker. Using measures of inter-annotator agreement, we illustrate how difficult it is to judge and perceive an individual's emotional state. By having the same annotators label the database twice, in two different time periods, we obtained well-labelled material and at the same time checked the consistency of each annotator's perception of emotional states in speech. In addition to the transcription, each recording in the database is given an emotion label together with a score expressing the quality of that label. It is precisely this that sets the database apart as one of the few Slovenian emotional speech databases that, besides the emotion label of each recording, also contain information about the quality of the label of the expressed emotional state.

The doctoral dissertation is divided into six chapters. The introduction presents the topic of the dissertation, describes the research goals set at the beginning of the work and gives a more detailed overview of the dissertation's content. In the second chapter we place our work within the wider field of speech technologies, highlight the well-known procedures that form the basis for developing systems for artificial emotional speech synthesis, and, by taking a broader view of the field, try to explain the choices made in the course of this work. The new Slovenian emotional speech database is described in the third chapter, where the methodology of its construction is presented in detail.
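The objective measure proposed above can be pictured with a short sketch that averages Euclidean distances between mel-cepstral feature vectors of an original and a synthesised utterance of the same text. Standard MFCCs from librosa stand in for the mel-cepstral parameters actually used, and frames are aligned by simple truncation, so this is a rough approximation rather than the dissertation's exact procedure; file paths are placeholders.

```python
# A rough stand-in for the proposed objective score: the average Euclidean
# distance between mel-cepstral feature vectors of an original recording and
# a synthesised recording of the same text. Smaller values mean the two
# utterances are acoustically more similar.
import numpy as np
import librosa

def mel_cepstral_distance(path_original, path_synthesized, n_coef=13):
    y_ref, sr_ref = librosa.load(path_original, sr=None)
    # Resample the synthesised signal to the reference sampling rate.
    y_syn, _ = librosa.load(path_synthesized, sr=sr_ref)
    mc_ref = librosa.feature.mfcc(y=y_ref, sr=sr_ref, n_mfcc=n_coef).T  # (frames, coef)
    mc_syn = librosa.feature.mfcc(y=y_syn, sr=sr_ref, n_mfcc=n_coef).T
    n = min(len(mc_ref), len(mc_syn))            # crude frame alignment by truncation
    frame_dist = np.linalg.norm(mc_ref[:n] - mc_syn[:n], axis=1)
    return float(frame_dist.mean())               # smaller = more similar
```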
We focus on the difficulty of labelling emotional states in speech, which we underline with the results of labelling selected emotional recordings twice, with the same annotators, in two different time periods. The double labelling also allows us to report on the annotators' consistency in labelling emotional states. The obtained labels are analysed, and an objective evaluation of the emotional speech in the database is given using an automatic system for recognising speaker-dependent emotional states.

The fourth chapter describes the proposed procedure for synthesising emotional speech on the basis of the quality of the labels of the emotional material. We first present the known baseline procedure for synthesising emotional speech with HMM-based modelling. For clarity the procedure is split into individual parts, which makes it easier to highlight the differences that arise when implementing a system for synthesising the speaker's emotional states. In the following section we describe how the procedure is adapted to the newly developed emotional speech database, where the quality of the emotion labels is put to good use.

The problem of evaluating speech synthesis systems is presented in the fifth chapter, where we describe the known subjective and objective evaluation procedures. Special attention is devoted to the evaluation of emotionally coloured synthetic speech, for which we present the proposed objective evaluation procedure. The procedure is based on the verification of synthesised emotional speech recordings: text-dependent synthesised signals are compared with their originals, and if the target and the original emotion labels match, the synthesised recording can be regarded as the closest approximation of the original recording. At the end of the chapter we present the evaluation results for the developed system for the artificial synthesis of Slovenian emotional speech, which was built on the emotional material in the EmoLUKS database.

In the concluding sixth chapter we once again present the dissertation's more important original contributions and attempt to assess them. The chapter closes with suggestions for further work and with guidelines that reflect our view and findings regarding potential improvements to systems for the artificial synthesis of Slovenian emotional speech.
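The consistency check based on double labelling described above can be illustrated with Cohen's kappa computed between an annotator's two labelling rounds. The labels below are invented for illustration and do not reproduce the dissertation's agreement statistics.

```python
# A minimal sketch of the agreement idea: Cohen's kappa between the two
# labelling rounds of a single annotator indicates how consistently that
# person perceives emotional states over time. Labels are invented.
from sklearn.metrics import cohen_kappa_score

round_1 = ["neutral", "anger", "joy", "neutral", "sadness", "anger"]
round_2 = ["neutral", "anger", "neutral", "neutral", "sadness", "anger"]

kappa = cohen_kappa_score(round_1, round_2)
print(f"intra-annotator agreement (Cohen's kappa): {kappa:.2f}")
```

The same measure, computed between two different annotators instead of two rounds of the same annotator, quantifies inter-annotator agreement.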

Language: Slovenian
Keywords: artificial synthesis of Slovenian emotional speech, hidden Markov models, recognition of the speaker's emotional states, EmoLUKS speech database
Type of material: Doctoral thesis
Organisation: FE - Fakulteta za elektrotehniko (Faculty of Electrical Engineering)
Year of publication: 2016
PID: 20.500.12556/RUL-86477
COBISS.SI-ID: 287152896
Date published in RUL: 17.10.2016

Secondary language

Language: English
Title: SLOVENIAN EMOTIONAL HMM-BASED SPEECH SYNTHESIS
Abstract:
Speech is the most common type of communication between humans and is often defined as the most natural human form of communication. With little effort, people use speech to communicate, learn and share different messages. However, human communication is not limited merely to the vocal sounds, but is also complemented by nonverbal cues. Speech is often accompanied by various gestures, facial expressions, posture, touch etc. They are perceived unconsciously by all the senses that are available in a given situation. The information thus gathered is collected and processed in the brain, which enables us, just as unconsciously, to interpret the message correctly and recognise its context. This means that nonverbal communication is an important supplement to human voice communication, enabling the recognition of additional information, which makes it possible to comprehend the message efficiently and place it into context.

The doctoral dissertation's aim is to research the formation and perception of vocal communication. Vocal communication can be defined as the utterance of words in a certain language and the accompanying nonverbal signs, which are often hidden. Nonetheless, with attentive listening the recipient can easily recognise them and respond accordingly. Nonverbal communication modifies the acoustic message and is frequently described as paralanguage. It is an integral part of vocal communication and can be divided into several components: rhythm, tone, intonation, language slips, word emphases, pauses and silence. The sum of these components combined with the uttered words forms the entirety of vocal communication. Another component of paralanguage is the set of paralinguistic states of the speaker, and emotions represent a distinctive part of these states. Speakers who are experiencing various emotional states will often modify their speech accordingly and communicate it with unique nonverbal signs. It is rare for people to actually be aware of how they modify their speech. On the other hand, this is precisely what often helps to recognise the true meaning of the communicated message. The recipient of an emotionally expressed vocal message can thus easily recognise such vocal modifications and classify the message, albeit unconsciously, into a certain group of the interlocutor's emotional states. This unconscious classification, as well as the unconscious formation of emotional messages, is a part of our day-to-day lives and influences verbal communication, comprehension and, last but not least, the perception of messages. The combination of verbal communication and all of its paralanguage components represents the entirety of a message, which is formed and perceived unconsciously and forms a part of our most natural means of communication. Speech, together with all the elements of nonverbal communication, is thus one of the most natural means of communication, experienced daily and spontaneously.

Ever since the beginnings of the digital era, researchers have wished to develop a way in which humans and machines could interact most naturally, i.e. by speaking to each other. Such human-machine verbal dialogue ought to reflect interpersonal communication as closely as possible. This means that both machine and human form and receive verbal messages. The reception of messages by machines is defined as the problem of speech recognition, while the formation of speech is defined as speech synthesis.
Both fields have many common characteristics, and speech synthesis is often described as an inverse process of speech recognition. Recently, the principles and processes involved in both have been significantly refined. However, despite having increasingly more powerful machines such as personal computers, smartphones and other modern-day digital devices, we still do not communicate with them verbally. One reason for this could be language diversity, besides the demanding research work required in the field of speech technologies. The fact that systems for speech modeling and synthesis are highly dependent on the language involved means that specific acoustic and lexical research must be carried out on each language separately. At the time of this writing, there are only a handful of languages for which systems for limited human-machine dialogue have been developed. Unfortunately, the majority of languages still lack such systems. One of the reasons for this could be the absence of individual language databases, which are necessary for the implementation of already developed solutions. Only well-annotated and sufficiently large speech databases make the development of such systems possible.

The dissertation treats the development of systems for artificial synthesis of Slovenian speech. The main goal of these systems is to produce artificial speech that is understandable and natural. It is often the case that artificial speech does not sufficiently resemble natural speech. Because of this, researchers mostly endeavour to develop a system with improved performance in these categories. If they had access to a speech database which was large enough to reflect all the characteristics of the language of a particular speaker, they would undoubtedly be able to create a superior system. Unfortunately, there are no such databases available at this time. The development of well-performing systems is thus held back by the amount of data in speech databases. Because building speech databases is a lengthy and costly process, smaller and more specialized databases are often produced. For the purpose of making artificial speech more natural, a recent trend in database production has been to add labels describing individual paralanguage components and the speaker's emotional states. Speech synthesis requires the analysis of as many utterances by a single speaker as possible. With access to such information, modern approaches to speech synthesis are able to model the characteristics of an individual's speech reasonably well. If we add emotion labels, we are able to model emotional characteristics as well. Again, however, this is only possible if there are enough samples available of an individual's speech in a particular emotional state.

Additionally, acquiring sufficiently large quantities of emotional speech samples is not the only problem encountered when building a database; there is also the question of clear definition. It is impossible to unambiguously define a speaker's emotional state. Therefore, its perception is always subjective and dependent on the recipient. People will inevitably vary in their interpretations of emotional states, especially when the speaker is someone they do not know. The acquisition of quality labels is one of the major problems in emotional speech sampling and is treated extensively in this doctoral dissertation.

Modern literature records two distinct approaches to speech synthesis.
One approach focuses on joining natural speech segments, while the other is based on the parametrization and modeling of speech segments. The main characteristic of the first approach is the ability to produce more naturally sounding artificial speech, since it uses segments of real-life recordings, while the latter produces artificial speech by modeling the segments and generating speech from models of acoustic units. A major distinction between the two is in the amount of resources needed to produce working systems, with the joining approach requiring significantly larger databases than the modeling approach. This is compounded when trying to implement components of paralanguage or emotional states. In this case, the joining approach needs even more data. Since emotional states are hard to define, we can expect to have an insufficient amount of quality emotion resources. Because of this, we based our development of a system for artificial synthesis of Slovenian emotional speech on parametric speech models. We obtain these models with the use of hidden Markov models (HMM). Building the system on the basis of speech parametrization enables us to model the speech with the use of statistical models estimated from the speech database. By modifying the parameters of these statistical models, we can change the acoustic and intonational properties of the speech as well as its duration. This is performed through the processes of adaptation and interpolation, which in this doctoral dissertation were also used to produce emotional states.

Every system for artificial speech synthesis needs to be evaluated. As mentioned before, there are two levels of criteria: on the first level, we evaluate how understandable the speech is, and on the second we evaluate its naturalness. The evaluation can be performed with a process similar to the one used for building the emotion database: the recordings of synthesized speech are evaluated by human evaluators, who fill out questionnaires to determine whether a particular recording exhibits the required emotional states. To guarantee a reliable evaluation, we need a large number of evaluators working on large amounts of artificially synthesized emotional speech samples. This process is categorised as subjective evaluation, which is known to be both lengthy and costly. That is why researchers endeavour to develop faster and more objective ways of evaluating their systems. However, at the time of this writing, there are no reliable and objective techniques available which would offer faster and more efficient evaluation of systems for emotional speech synthesis.

The doctoral dissertation focuses on the development of a system for artificial emotional speech synthesis in the Slovenian language. We build all the components necessary for the development of a parametric system for artificial speech synthesis. By modifying existing methods based on hidden Markov models (HMM), we propose a new technique to build a system for Slovenian emotional speech synthesis that works with limited emotion resources. The proposed technique is based on the statistical analysis of the quality of labels applied to emotional speech recordings. Such an approach enables us to extract specific information expressed by speakers in certain emotional states using only small amounts of emotional speech. Emotionally neutral speech resources also play an important role in this process.
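One conceivable way in which such label-quality scores could enter the adaptation step is sketched below as a simple quality-weighted mean offset applied to a neutral model. This is not MLLR/MAP adaptation and not the dissertation's actual procedure; all names, shapes and weights are hypothetical.

```python
# An illustrative sketch of how per-utterance label-quality scores could
# weight emotional adaptation data when shifting a neutral model toward an
# emotional style. A simple global mean-offset update; hypothetical only.
import numpy as np

def adapt_means_with_label_quality(neutral_means, emotion_frames, quality_weights):
    """Shift neutral state means by a quality-weighted global offset.

    neutral_means   : (n_states, dim) means of the neutral model
    emotion_frames  : list of (frames_i, dim) feature arrays, one per utterance
    quality_weights : per-utterance label-quality scores in [0, 1]
    """
    w = np.asarray(quality_weights, dtype=float)
    utt_means = np.stack([f.mean(axis=0) for f in emotion_frames])   # (n_utts, dim)
    target_mean = (w[:, None] * utt_means).sum(axis=0) / w.sum()
    offset = target_mean - neutral_means.mean(axis=0)                # global bias
    return neutral_means + offset
```

Utterances with low-quality emotion labels contribute little to the offset, so the adapted model is dominated by the recordings whose emotional content the annotators agreed on.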
Such recordings are usually the most abundant in emotional speech databases and thus represent the basis for building a system for artificial emotional speech synthesis. Emotionally neutral resources can be used with HMM techniques to develop a basic statistical model. Adaptation techniques allow us to transform statistical models of natural speech which scored well in evaluation into statistical models of a particular emotional state. Once we have such a model, we can use it to synthesize high-quality speech in the target emotional state.

Another innovation introduced by the dissertation pertains to the objective evaluation of systems for emotional speech synthesis. We propose a technique based on the Euclidean distance between mel-cepstral feature vectors of the original and artificial speech recordings. The dissimilarities exhibited by each artificially synthesized emotional speech recording represent the measure of its closeness to the original recording. The smaller the dissimilarities, the closer the artificial recording is to the original. If the latter is annotated with emotion labels, we can use this method of verification to automatically acquire a result that expresses whether or not the system for artificial speech synthesis produced speech which is closest to the original.

We also present the new Slovenian emotional speech database that we built from the recordings of Slovenian radio plays. The acquisition, labeling and further processing of recordings was performed with full permission from RTV Slovenija. Although the resources involve emotional states that are acted out, we presume them to be similar to the emotional states that arise in spontaneous speech. This presumption is based on the wider context of the plays and the dialogues between the protagonists. The actors carry out their roles with a wide array of emotional states expressed by emotional speech. This means that we are not limited merely to one play, but can instead gather acoustic material of a particular actor or actress from several plays. An important factor while gathering acoustic recordings is the level of their quality. Radio plays are generally recorded with professional studio equipment, so the recordings are of sufficient quality to allow further processing. Based on the examples of one actor and one actress, the dissertation presents the methodology required for the extraction of acoustic emotion resources from radio plays. Through measures of agreement between evaluators, we present the problem of individual perception of emotional states. High quality of the labeled resources was achieved by performing the labeling with the same evaluators twice, at different times. This also allowed us to check the consistency of the individuals' perceptions of emotional states. Besides the recordings and their transcriptions, the resulting database contains emotion labels with scores expressing their quality. The fact that our emotion labels are scored puts this database among the few Slovenian emotional speech databases that contain such information.

The doctoral dissertation presents all the above-mentioned innovations. It is divided into six chapters. The introduction comprises a presentation of the topic, a description of the research goals determined at the start of the research, and a detailed overview of the content.
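The verification step described above reduces to a simple decision rule: among the labelled original recordings of the same text, find the one closest to the synthesised utterance and check whether its emotion label matches the requested emotion. A minimal sketch with invented numbers follows; it assumes the distances have already been computed, for example with a mel-cepstral measure.

```python
# A minimal sketch of the verification decision: the nearest labelled original
# is selected and its emotion label is compared with the emotion the system
# was asked to produce. Distances and labels below are invented.
def verify_emotion(distances_to_originals, original_labels, target_emotion):
    """Return True if the closest original carries the target emotion label."""
    nearest = min(range(len(distances_to_originals)),
                  key=lambda i: distances_to_originals[i])
    return original_labels[nearest] == target_emotion

# Hypothetical example: distances of one synthesized "anger" utterance to
# three originals of the same text rendered in different emotional states.
distances = [4.1, 2.7, 5.9]
labels = ["neutral", "anger", "sadness"]
print(verify_emotion(distances, labels, target_emotion="anger"))  # True
```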
In the second chapter, we frame our work within the wider field of speech technologies and highlight the existing techniques that form the basis for the development of systems for artificial emotional speech synthesis. At the same time, we attempt to explain the chosen research paths by providing a broad overview of the treated research field.

The new Slovenian emotional speech database, as well as the methodology used in its development, is presented in the third chapter. We focus on the difficulty of labeling emotional states in speech and underline it with the results of two separate labelings of selected emotional recordings carried out by the same evaluators at different times. Double labeling of emotional states provides us with an insight into the evaluators' consistency. The labels are analysed and an objective evaluation of the emotional speech is given with the use of an automated system for recognising speaker-dependent emotional states.

The fourth chapter focuses on describing the proposed method of artificial emotional speech synthesis based on the quality of labels applied to emotion resources. We start by presenting the existing method of synthesizing artificial emotional speech on the basis of HMM modeling. For the sake of clarity, we divide the process into individual components, which also allows us to emphasise the differences encountered in developing the system for the synthesis of emotional states. In the following part, we continue with the description of the adjustments to the process involving the developed emotional speech database, where we make good use of the quality of labels applied to emotion resources.

The issue of evaluating systems for artificial speech synthesis is treated in the fifth chapter. Here, we describe the existing subjective and objective evaluation techniques. Special attention is given to the evaluation of emotionally expressed artificial speech, and the proposed method for objective evaluation is presented. Our method is based on the verification of artificially synthesized emotional speech recordings. The process of verification involves the comparison between text-dependent artificially synthesized signals and their original recordings. If the target emotional state label corresponds to the original one, we can consider the artificially synthesized recording as the closest possible approximation of the original recording. We conclude the chapter with a presentation of the results of the evaluation of the developed system for artificial Slovenian emotional speech synthesis, which was developed on the basis of emotion resources in the EmoLUKS database.

In the final chapter, we summarise the more important achievements of the dissertation and attempt to evaluate them. We close the chapter by proposing directions for further research and giving guidelines, based on our findings, for possible improvements to systems for artificial Slovenian emotional speech synthesis.
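A speaker-dependent emotion recogniser of the kind used above for the objective evaluation of the database can be sketched generically as utterance-level MFCC statistics fed to a simple classifier trained separately for each speaker. This is a generic illustration, not the recogniser used in the dissertation; file paths, labels and classifier settings are placeholders.

```python
# A generic sketch of a speaker-dependent emotion recognizer: utterance-level
# MFCC statistics feed an SVM trained on one speaker's recordings, and
# cross-validated accuracy gives a rough objective check of the labels.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def utterance_features(path, n_mfcc=13):
    """Mean and standard deviation of MFCCs over the whole utterance."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def evaluate_speaker(wav_paths, emotion_labels, folds=5):
    """Cross-validated recognition accuracy for one speaker's utterances."""
    X = np.stack([utterance_features(p) for p in wav_paths])
    clf = SVC(kernel="rbf", C=1.0)
    return cross_val_score(clf, X, emotion_labels, cv=folds).mean()
```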

Keywords: emotional speech synthesis, hidden Markov models, emotion recognition from speech, speech database EmoLUKS
