News
1st AI: Brains and Bits Symposium
Centre for Fundamentals of AI and Computational Theory
13 April 2026
The first QMUL AI: Brains and Bits symposium was held on 13 April 2026. It brought together researchers from across Science and Engineering to discuss the fundamentals of AI, including how AI works and how it should work. It featured talks from investigators in Biology, the Blizzard Institute, Physics, Maths, Electronic Engineering and Computer Science, as well as whole-group and small-group discussion sessions, bringing a wide range of disciplinary viewpoints to bear on the fundamentals of AI.
Topics covered ranged from animal-based neuroscience and the fundamental principles of human learning to collective AI and the social behaviour of machines.
Abhishek Banerjee first spoke about the neuroscience behind active learning in mammals, driven by life challenges, resources and social interaction. Research on rats, for example, in which parts of the brain can be switched off, helps us understand deeply how mammalian brains work and so how AI could. Computer science models can now help us understand this better, and how it might lead to a better understanding of future agentic AI. Early models were based on feed-forward mechanisms running from sensory input to flexible behaviour. Now models are bidirectional, and this holds at the level of individual neurons as well as at higher levels.
Iran Roman, by contrast, spoke about machine learning models that learn dynamically. Modern machine learning is based on back propagation, but older models were more sophisticated in this respect. Rather than having a passive learning phase followed by an action phase in which the learning is used, in dynamic models learning continues to happen, so behaviour is reactive. Results show this can be as effective as current algorithms, and it has the potential to be a more flexible approach for the future as it is interpretable, controllable and optimizable.
Andrea Benucci then discussed how the human brain works from a psychology point of view, and how the brain processes visual perception through two separate streams, one focusing on the 'what' and another on the 'where', that follow different pathways in the brain. Our brains also take input about both bodily motion and eye motion (top-down motor signals) to maintain a stable perception of what is being seen, rather than working from the visual signal (a bottom-up signal) alone. This understanding has applications in embodied agent systems such as robots and self-driving cars, suggesting new architectures and so new ways for AI to work.
Vito Latora finished the first session with a network science take on the fundamentals of AI, pointing out how little work there is on collective intelligence and on using the wisdom of crowds. This linked to a later point made by David Berman: that we should aim for social intelligence rather than ever-"bigger brains" in AI. The former is likely to be the long-term way forward, so more research is needed on fundamentals such as how behaviour spreads, what collective AI will look like and how human-AI collective intelligence might work.
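The "wisdom of crowds" effect mentioned above can be illustrated with a minimal sketch (an illustration of the general principle, not anything presented at the symposium): averaging many independent noisy estimates of a quantity typically lands much closer to the truth than a typical individual estimate does.

```python
import random
import statistics

# Illustrative sketch of the wisdom-of-crowds effect: many independent,
# noisy estimates of the same quantity, once averaged, are far more
# accurate than a typical individual estimate.

random.seed(0)
TRUE_VALUE = 100.0

# Each "agent" estimates the true value with independent Gaussian noise.
estimates = [TRUE_VALUE + random.gauss(0, 20) for _ in range(1000)]

crowd_error = abs(statistics.mean(estimates) - TRUE_VALUE)
typical_individual_error = statistics.mean(abs(e - TRUE_VALUE) for e in estimates)

print(f"typical individual error: {typical_individual_error:.2f}")
print(f"crowd (mean) error:       {crowd_error:.2f}")
```

With independent errors, the error of the crowd's mean shrinks roughly as one over the square root of the crowd size, which is why collective estimates can beat individual ones.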
In the afternoon, Chris White spoke first about the ongoing importance of meta-science and how AI can contribute. To understand public attitudes to science, for example, we need to understand people's stances on issues from social media posts and the like. What is generally done instead is sentiment analysis, which is consistently bad at predicting stance.
David Berman outlined his personal journey from studying string theory to working in industry applying AI and physics-informed approaches, including to formal mathematics. He raised the issue of understanding the scaling properties of systems: do we scale communication speeds, architecture or the number of people connected? For example, if we double the speed of communication, do we double the complexity of the emergent society? The rate at which AI is currently scaling is essentially insane: doubling every 7 months. Projects need to take such rapid change into account.
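The quoted doubling period is worth turning into a number. A quick calculation shows what doubling every 7 months implies per year and over a few years:

```python
# A quantity that doubles every 7 months grows by a factor of
# 2**(12/7) per year.
doubling_period_months = 7
annual_growth = 2 ** (12 / doubling_period_months)
print(f"growth per year: {annual_growth:.2f}x")        # roughly 3.3x per year

# Compounded over three years the factor becomes enormous.
print(f"growth over 3 years: {annual_growth ** 3:.1f}x")
```

That compounding is why a project planned around today's capabilities can be overtaken before it finishes.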
Boris Khoruzhenko then discussed the modelling of complex systems and how interactions can be replaced by random matrices or random functions in order to investigate them, answering questions such as whether a large system will be stable, e.g. whether small disturbances only lead to small changes.
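A minimal sketch of this style of stability question, in the spirit of May's classic random-matrix argument (an assumption for illustration, not the speaker's exact model): linearised dynamics dx/dt = Ax are stable when every eigenvalue of the interaction matrix A has negative real part, so small disturbances decay rather than grow.

```python
import numpy as np

# Sketch: stability of a large system whose pairwise interactions are
# drawn at random, with uniform self-damping -d on the diagonal.
# Stable iff max real part of the eigenvalues of A is negative.

rng = np.random.default_rng(0)

def is_stable(n, sigma, d=1.0):
    """Random interactions N(0, sigma^2), self-damping -d on the diagonal."""
    A = sigma * rng.standard_normal((n, n))
    np.fill_diagonal(A, -d)
    return bool(np.max(np.linalg.eigvals(A).real) < 0)

# Circular-law heuristic: stability holds while sigma * sqrt(n) < d.
n = 200
print(is_stable(n, sigma=0.5 / np.sqrt(n)))  # well below threshold: stable
print(is_stable(n, sigma=2.0 / np.sqrt(n)))  # well above threshold: unstable
```

The interesting point, which this reproduces, is that stability depends only on a few statistical parameters (system size, interaction strength, damping), not on the details of the individual interactions.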
Finally, before the participants split into small groups to discuss the issues that had emerged, Adrian Baule talked about the application of the statistical mechanics of non-equilibrium systems, and how behaviour emerges from the behaviour of the components. AI models can potentially help us understand such systems, and the emergent properties of network-based and social AI architectures likewise need to be understood.
The talks led to a great deal of interdisciplinary discussion of the issues arising, with potential for a range of future collaborations.
People: Mark Sandler
Contact: Mark Sandler
Email: mark.sandler@qmul.ac.uk
Updated by: Paul Curzon
