Some highlights from our conversation
“I am very skeptical of the notion of a disentangled representation as such. In particular, I think that the notion of disentanglement has to be much more specific to the context and the goal than I usually see people thinking about it.”
“Under certain training conditions, ImageNet models are biased towards using texture to classify objects much more than humans are…that’s partly because of the training procedures, like if you do like a bunch of tiny crops of images in training, that kind of biases models towards texture because it’s the only thing there…but it’s also because ImageNet is just set up with these very strong correlations in a way that maybe human experience isn’t—because we see a lot of cartoon lions as well as real lions, and we see things in more variable backgrounds, and so on. So I think if you want to generalize well to the ImageNet tests, you might actually not want to solve the problem in the same way as humans necessarily because there is real signal in those textures that humans are ignoring, slightly to their detriment.”
“We basically tried to argue that AI should write more review articles and do more meta-analyses and that psychology should basically just publish more incremental papers.”
“Language as a means of compression might play a role in [making us robust to changes in our representations]. It has some nice properties for memory; it’s a relatively small thing to remember a description of something, and it’s relatively resilient to noise in a way that a continuous representation maybe isn’t. So maybe both of those properties are nice if you want to find a way of representing something in a way that you’ll remember it a long time in the future when your representations have shifted slightly.”
“Ultimately I’d like to train an agent in an environment like a human kid has. And in particular, one of the things we talked about a bunch in the symbolic behavior paper was the value of the social interactions where you come to agree on meaning about things. Kids do this all the time with creating imaginary worlds with their friends or their family, and having conversations with their parents about what things mean and what things are. And so I’d really love to train agents in environments that are more socially interactive in that way.”
Referenced in this podcast
- The deep linear networks paper: An analytic theory of generalization dynamics and transfer learning in deep linear networks by Lampinen & Ganguli
- Understanding deep learning requires rethinking generalization by Zhang et al.
- The “odd one out” paper: Tell me why! — Explanations support learning of relational and causal structure by Lampinen et al.
- What shapes feature representations? Exploring datasets, architectures, and training by Hermann & Lampinen
- Brenden Lake, whose work on cognitive abilities that elude AI is a source of inspiration
- George Lakoff and the book Metaphors We Live By
- Psychologist Susan Goldin-Meadow (accidentally referred to as Susan Gelman in the podcast) and her work on the role of hand gestures
- Professor Chelsea Finn
- DRAW: A Recurrent Neural Network For Image Generation by Gregor et al.
- The Origins and Prevalence of Texture Bias in Convolutional Neural Networks by Hermann et al.
- Psychologist Linda Smith
- The opinion piece Publishing fast and slow: A path toward generalizability in psychology and AI by Lampinen et al., which was a response to The generalizability crisis by Yarkoni
- Building Machines That Learn and Think Like People by Lake et al.
- Can Wikipedia Help Offline Reinforcement Learning? by Reid et al.
- Jerry Fodor and the language of thought hypothesis
- Symbolic Behaviour in Artificial Intelligence by Santoro et al.
- Charles Sanders Peirce, who described symbols in terms of a three-part relationship
- The book A Man Without Words by Susan Schaller
- Psychologist Gary Lupyan
- The HCAM paper: Towards mental time travel: a hierarchical memory for reinforcement learning agents by Lampinen et al.
Thanks to Tessa Hall for editing the podcast.