Research

Our research has the following objectives: 

  1. Create privacy-preserving, deep transfer learning-based audio-visual speech enhancement algorithms. We will develop a new generation of evolved, context-aware transfer learning models and homomorphic (encrypted-domain) implementations, as well as alternative deep cognitive neural network architectures (a minimal encrypted-inference sketch follows this list). 
  2. Integrate audio-visual speech enhancement algorithms with wireless-based multi-modal models for lip-reading and cognitive load prediction, to maximise end-user uptake. We aim to deliver a transformative framework for effective speech enhancement and optimised end-user cognitive load, regardless of environmental conditions.
  3. Prototype an off-chip, real-time audio-visual hearing aid through algorithmic design and development of a low-complexity, energy-efficient, cognitive Internet of Things transceiver, with a secure end-to-end system and smart context-aware connectivity.
  4. Prototype an on-chip, real-time audio-visual hearing aid with a full range of secure, context-aware audio-visual functionality for embedded mobile/wireless sensors in 5G and beyond.
  5. Develop a new large noisy audio-visual corpus covering speaker- and noise-independent scenarios, speech-in-noise listening tests, and techniques for predicting objective audio-visual speech quality/intelligibility and for clinical evaluation (see the metric sketch after this list).
  6. Test the real-time hearing aid (on- and off-chip) in an environmentally friendly, low-CO2-emission smart care-home testbed, and pilot a clinical evaluation with volunteers with normal hearing and with hearing loss.
  7. Explore the public perception of hearing loss and the impact of hearing loss on health and wellbeing, hearing technology, and hearing care through novel artificial intelligence analysis. You can read the privacy notice at: link
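
To make the encrypted-domain direction of objective 1 more concrete, below is a minimal sketch of homomorphic inference using the open-source TenSEAL library as a stand-in for the project's homomorphic implementations; the feature vector, layer weights, and sizes are illustrative placeholders, not part of the actual models.

import numpy as np
import tenseal as ts

# CKKS context: approximate homomorphic arithmetic over real numbers.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # required for ciphertext-plaintext matmul

rng = np.random.default_rng(0)
features = rng.standard_normal(16)      # placeholder: one frame of audio-visual features
weights = rng.standard_normal((16, 8))  # placeholder: pretrained linear-layer weights
bias = rng.standard_normal(8)

# The client encrypts its features; the server never sees them in the clear.
enc_features = ts.ckks_vector(context, features.tolist())

# The server applies one plaintext linear layer directly on the ciphertext.
enc_out = enc_features.matmul(weights.tolist()) + bias.tolist()

# The client decrypts; the result matches the plaintext computation up to CKKS noise.
print(np.allclose(enc_out.decrypt(), features @ weights + bias, atol=1e-2))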
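
For the objective quality/intelligibility prediction in objective 5, the established audio-only metrics STOI and PESQ are a natural starting point that the project aims to extend to the audio-visual case. Below is a minimal sketch using the pystoi and pesq Python packages; the synthetic modulated-noise "utterance" is a placeholder for real recorded speech.

import numpy as np
from pystoi import stoi  # short-time objective intelligibility
from pesq import pesq    # perceptual evaluation of speech quality (ITU-T P.862)

fs = 16000                  # 16 kHz wide-band speech
t = np.arange(2 * fs) / fs  # two seconds of signal
rng = np.random.default_rng(0)

# Placeholder "speech": noise with a syllabic-rate (~3 Hz) amplitude envelope.
clean = np.abs(np.sin(2 * np.pi * 3 * t)) * rng.standard_normal(len(t))
noisy = clean + 0.3 * rng.standard_normal(len(t))  # additive-noise degradation

# Higher is better for both metrics: STOI lies roughly in [0, 1],
# wide-band PESQ (MOS-LQO) roughly in [1, 4.6].
print("STOI:", stoi(clean, noisy, fs, extended=False))
print("PESQ:", pesq(fs, clean, noisy, "wb"))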