
Spoken Language Forensics & Informatics (SLFI) group

Speech Lab, LTRC, IIIT-Hyderabad

Our Objectives


  • As part of spoken language forensics, we automatically extract signatures that indicate how spoken content was said, in a non-native context

  • As part of spoken language informatics, we develop automatic methods to build robust models for spoken content information retrieval, mainly in a non-native context, using spoken language forensics

  • Using spoken language forensics and informatics, we develop personalized virtual tools for language learning through spoken interactions and for speech-based therapy for patients with speech or cognitive disorders

  • To build spoken language forensics and informatics models, we develop automatic data collection, validation and generation methods that reduce manual intervention at all stages of our analysis


Our Work

    Resource-constrained non-native spoken error analysis (click here for details)

    Publications: In progress
    Funding agencies:

    Resource-constrained prosodic analysis of non-native speech (click here for details)

    Publications: NCC 2021; Interspeech 2021
    Funding agencies: KCIS Fellowship, IIIT Hyderabad;

    Automatic spoken data validation methods in a non-native context (click here for details)

    Publications: In progress
    Funding agencies: IHub-Data, IIIT Hyderabad;

    Label-unaware speech intelligibility detection and diagnosis in spontaneous speech (click here for details)

    Publications: In progress
    Funding agencies: IHub-Data, IIIT Hyderabad;

    Pronunciation assessment and semi-supervised feedback prediction for spoken English tutoring (click here for details)

    Publications: ICASSP 2016, 2017, 2018; Interspeech 2018, 2019; SLaTE 2019; Indicon 2018; Oriental COCOSDA 2019; Elsevier Speech Communication 2016; JASA 2018, 2019.
    Funding agencies: DST; PM Fellowship;

    Automatic synthesis of articulatory videos from speech (click here for details)

    Publications: ICASSP 2018; Interspeech 2019.
    Funding agencies:


Our Tools

  • voisTUTOR: Online tool for learning spoken English pronunciation.

  • SPIRE-ABC: Online tool for acoustic-unit boundary correction.

  • Pronunciation quality analysis: Online tool for annotating the quality of learners' spoken English pronunciation.

  • Phoneme transcribing interface: Online tool for transcribing the phonemes of spoken data.