The key theme of my research to date has been the design of intelligent systems for processing and modelling temporal data. I have explored neural network architectures, spanning biologically inspired modelling and computationally efficient machine learning, across a variety of data types: numerical, audio, image and video data, as well as three-dimensional feature data from virtual and augmented reality environments. My most significant scientific contributions from my independent research career to date concern:
- Deep learning architectures, algorithms and applications for audio, speech and natural language understanding
- Application of intelligent systems to immersive virtual, augmented and mixed environments
- Development of biologically inspired deep learning for speech recognition and enhancement
- Privacy-preserving intelligent systems for audio processing
Most recent research
- Shrestha, R., Glackin, C., Wall, J., Cannings, N., Rajwadi, M., Kada, S., Laird, J., Laird, T. and Woodruff, C., Speaker Recognition using Multiple X-Vector Speaker Representations with Two-Stage Clustering and Outlier Detection Refinement, 7th IEEE Cyber Science and Technology Congress (CyberSciTech), 2022.
- Nossier, S. A., Wall, J., Moniri, M., Glackin, C. and Cannings, N., Convolutional Recurrent Smart Speech Enhancement Architecture for Hearing Aids, INTERSPEECH, 2022.
- Nossier, S. A., Wall, J., Moniri, M., Glackin, C. and Cannings, N., A Two-Stage DNN for Speech Enhancement and Reconstruction in the Frequency and Time Domains, IEEE International Joint Conference on Neural Networks (IJCNN), 2022.
- Poobalasingam, V., Cannings, N., Glackin, C., Wall, J., Sharif, S. and Moniri, M., A Mixed Reality Approach for Dealing with the Video Fatigue of Online Meetings, 7th International XR Conference, 2022.