RSS · Spotify · Apple Podcasts · Pocket Casts
Below are some highlights from our conversation as well as links to the papers, people, and groups referenced in the episode.
Some highlights from our conversation
“Neuroscience is such an open field still, it’s so nascent, so in its early stages, that there are so many important questions. My perspective is that we really need to sync theory and experiment; whatever questions we address, we really need to think about how we can create this loop between theory and experiments so that we can test the predictions made by our models and then come back and correct our models based on the experiments and findings.”
“If you’re thinking about spatial navigation, there’s also this question of generalization of spatial maps across so many different environments that we encounter in our daily lives. I lived in India and then I lived in Canada and now I’m in the U.S. And I still can go back to India and I’ll remember the map of the house that I was living in. […] That was another question that I was fascinated by…this high level question of how we are so good at generalizing across spaces.”
“The basic idea is that if you separate the memory from features, the memory part of it doesn’t need to have an explicit, arbitrary information content. You could use a pre-defined set of states as attractors that form fixed points of dynamical systems. Because these are pre-chosen, they don’t really have any information content in them, so the upper bounds in terms of information theory stop applying and you can store an exponential number of predefined fixed states in this.”
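To make the "fixed scaffold" idea in the quote above more concrete, here is a minimal NumPy sketch. It is not the actual model from the Fiete-lab paper linked below; the layer sizes, the Hebbian outer-product learning rule, and the nearest-neighbour cleanup step (standing in for true attractor dynamics) are all illustrative assumptions. The key point it illustrates is that the scaffold states are chosen before any content exists, and arbitrary content is stored only in the heteroassociative weights that map features onto and back off of that scaffold.

```python
import numpy as np

rng = np.random.default_rng(0)

N_S, N_F, P = 256, 128, 20   # scaffold size, feature size, number of memories

# Pre-defined scaffold states: random +/-1 codes fixed BEFORE any content exists.
# In an attractor network these would be fixed points of the recurrent dynamics;
# here a nearest-neighbour "cleanup" step stands in for those dynamics.
scaffold = rng.choice([-1.0, 1.0], size=(P, N_S))

def cleanup(s):
    """Snap a noisy scaffold vector onto the closest pre-defined fixed point."""
    return scaffold[np.argmax(scaffold @ s)]

# Arbitrary content ("features") we want to memorise.
features = rng.choice([-1.0, 1.0], size=(P, N_F))

# Heteroassociative weights learned with simple outer-product (Hebbian) rules:
# one matrix maps features into the scaffold, the other maps a scaffold state back.
W_feat_to_scaf = scaffold.T @ features / N_F
W_scaf_to_feat = features.T @ scaffold / N_S

def recall(cue):
    s = cleanup(np.sign(W_feat_to_scaf @ cue))  # noisy cue -> clean scaffold state
    return np.sign(W_scaf_to_feat @ s)          # scaffold state -> reconstructed features

# Corrupt 25% of memory 3 and check how much of it comes back.
cue = features[3].copy()
cue[: N_F // 4] *= -1
print("fraction of bits recovered:", np.mean(recall(cue) == features[3]))
```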
“I think the field has been biased towards experiments maybe also because of the kind of awards you get…if you find grid cells, you’ll get a Nobel prize, which is cool. But also I’m not sure whether this is the only factor that’s biasing, but traditionally the field has been biased towards experiments a lot, and I feel like definitely we need a lot of theorists to propose experiments so that they can be validated. Because without good theoretical ideas, I don’t think we can make sense of the data we are seeing.”
Referenced in this podcast
- Chris Eliasmith’s TED talk: How to Build a Brain
- Realistic neurons can compute the operations needed by quantum probability theory and other vector symbolic architectures
- The Centre for Theoretical Neuroscience at the University of Waterloo
- Holographic Reduced Representation: Distributed Representation for Cognitive Structures (CSLI Lecture Notes No. 150) by Tony A. Plate
- Complexity in Information Theory by Yaser S. Abu-Mostafa
- Content Addressable Memory Without Catastrophic Forgetting by Heteroassociation with a Fixed Scaffold
- Recency Bias (Tversky & Kahneman)
- MIT Brain+Cognitive Sciences
- Prof. Joshua Tenenbaum
- Prof. Ila Fiete
- The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep by Ila Fiete and colleagues
- Ila Fiete’s NeurIPS 2019 talk
- Accurate Path Integration in Continuous Attractor Network Models of Grid Cells
- Memory palace
- Control theory
- Prof. Jean-Jacques Slotine
- Applied Nonlinear Control by Jean-Jacques Slotine and Weiping Li
- One shot learning of simple visual concepts by Brenden Lake, Josh Tenenbaum, et al.
- Neurotechnology
- Haim Sompolinsky
- Larry Abbott
- ICoN Center
Thanks to Tessa Hall for editing the podcast.