
Welcome to The Centre for Multimodal AI

The Centre for Multimodal AI consolidates AI research in the School of Electronic Engineering and Computer Science. It builds on the expertise of world-leading academics in the school, with an emphasis on the development of machine learning algorithms, systems and applications for the analysis and synthesis of multimodal information such as audio, images, video and text, and on the development of AI methodologies in the domains of games and decision support systems.

The objective of the centre is to contribute to the development of AI methods and systems that will shape the future of our economy and society, striving not only for scientific excellence but also to set and address research challenges for the benefit of society. These include challenges around developing AI methods and systems that are trustworthy, ethical and responsible, as well as efficient and capable of addressing some of the major challenges in the domains of health, education and the digital economy.

The centre comprises more than 50 academics and 150 researchers, hosted across six research groups: the Centre for Digital Music, the Computer Vision group, the Multimedia and Vision group, the Computational Linguistics lab, the Game AI group, and the Machine Intelligence and Decision Systems group. Several members of the centre are Fellows of the Turing Institute and/or of the Digital Environment Research Institute (DERI).


Recent Publications

  • Hunte JL, Neil M, Fenton NE, Osman M and Bechlivanidis C (2024). The effect of risk communication on consumers' risk perception, risk tolerance and utility of smart and non-smart home appliances. Safety Science, Elsevier, vol. 174.
  • Haleem MS, Cisuelo O, Andellini M, Castaldo R, Angelini M, Ritrovato M, Schiaffini R, Franzese M and Pecchia L (2024). A self-attention deep neural network regressor for real-time blood glucose estimation in a paediatric population using physiological signals. Biomedical Signal Processing and Control, Elsevier, vol. 92.
  • Li Y, Yuan R, Zhang G, Ma Y, Chen X, Yin H, Xiao C, Lin C, Ragni A, Benetos E, Gyenge N, Dannenberg R, Liu R, Chen W, Xia G, Shi Y, Huang W, Wang Z, Guo Y and Fu J (2024). MERT: Acoustic music understanding model with large-scale self-supervised training. International Conference on Learning Representations (ICLR), 7–11 May 2024.
