Understanding speech and scene with ears and eyes

Acronym

USSEE

Project description

One of the biggest challenges of AI is to develop computational abilities to understand speech and video scenes as effectively as humans do. This project aims to develop multimodal techniques for understanding and interpreting aural and visual inputs. These novel machine-learning-based techniques will first learn representations of visual stimuli and human speech at various abstraction levels, and then learn cross-modal correlations between those representations. This can be achieved by devising new network structures and using diverse uni- and multimodal datasets to train the various parts of the model, first separately and then jointly. As a result, we believe, the accuracy of speech recognition, visual description, and interpretation will all improve.
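The abstract does not specify an architecture, but the core idea it describes (representations learned per modality, with cross-modal correlations learned on top) can be sketched in a toy form. In this illustrative NumPy sketch, random vectors stand in for the unimodal speech and visual representations, and the cross-modal correlation is modelled as a simple least-squares linear map; all names and dimensions are hypothetical, not from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for representations from separately trained unimodal
# encoders (dimensions are illustrative only).
n_pairs, d_speech, d_visual = 8, 64, 48
speech = rng.standard_normal((n_pairs, d_speech))

# Paired "visual" representations generated by an unknown linear
# relation plus noise, so there is a correlation to recover.
true_map = rng.standard_normal((d_speech, d_visual))
visual = speech @ true_map + 0.1 * rng.standard_normal((n_pairs, d_visual))

# "Learn" the cross-modal correlation as a least-squares linear map
# from the speech space into the visual space.
W, *_ = np.linalg.lstsq(speech, visual, rcond=None)

def l2norm(x):
    # Row-wise L2 normalization, so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Cross-modal retrieval on the training pairs: each projected speech
# vector should be most similar to its own paired visual vector.
sim = l2norm(speech @ W) @ l2norm(visual).T
matches = sim.argmax(axis=1)
```

This only checks retrieval on the pairs the map was fitted to; it is a sketch of the correlation-learning idea, not of the project's actual (deep, jointly trained) models.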

Start year

2022

End year

2024

Granted funding


Mikko Kurimo
329 586 €

Role in the Academy of Finland consortium

Other parties in the consortium

Partner
Aalto University (345791)
326 920 €

Funder

Academy of Finland

Type of funding

Academy Project with special focus

Other information

Funding decision number

345790

Fields of science

Computer and information sciences

Research fields

Computational data analysis

Identified themes

languages, linguistics, speech